[Binary artifact: ustar tar archive of Zuul CI job output (owner core:core). Members:

var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz

The only file payload is kubelet.log.gz, a gzip-compressed kubelet log; the compressed byte stream is not recoverable as text and has been omitted.]
gjiJ6ty+a:=E[e1T=Ҫ'窧g7y8aY_Tؚ# _o+=2 ?b+HözraףrX:ϩ".DXPq*Ap ,sڄM=a^ji|>.E 2kW{S@>fI p3 md{)Ob0 pY2 ssųڂ#k b4W*~BdUgԮN , ";Ҷ]ȸfSTNURORn%>Hum64Y]]qm9Bz5v<0M=>ۨW(jger99ܝ.z6@llFD jVmgT-'5imΘs5ʚMtY"DZp:n.Xټӭ3]X'BfS/(ߔUoގ'F_.߷u4Ϳ۱-ژ J:ͱ%Em 3e"ol*Mx{x0F"5QاjR\`Kwh6^VӳySM^=`lTZoA}2)b8uIk5#ZԞ|tNhRYrredi! *MS 6*ƩD%B%xc15Q*'ù@Uqia8nXaͥA %2O躞Gvc !lw[n))yPM aZBҋËfG}%o9}>twE0gVT]{oG*=`GGuws`7Agb#Iɶ|~3$EQCH1 YNTU ]B>T* ' 쉻ϙ1*0ibą`QFҴCeNI5|4hYylQ~t?0f8T$@z%i]Ҡ4!q(jtN!I/P(f"tBFQ,*C" s.pnf^l@/[*rUhf` *E:'X& V-Q[ ̐ D4ymk*kVDNxlxlB1woMy*+V1ua.2d4H@z{Ci<@8*' WC 0[ym>&u{nmcz@!mVVCl4->dvH/NK0b RXPF$t.,#>ZIk̲z ʥ :.NBE0Z.{NҰwAוM 1҃^ikG]Wcx:}pUw^x҉W{3:>7J܊iՍ-$RYv;|v$LK>_jT0~ѫa0T'-h`u%e;/*פrpcp݂Ag>zڲ^1{rfBpxzκr)JgTpl#BJ傤"Jyΐ5 EĐ~цv^kl9[B{d4|>^lKhhpM>;> I}pb/-RGÝ$يTTT)Vw&o窲;3-=Fnt@ceMYt)ss@ Lݵ#c.̌`P`+($tuCB |DCAek'U)x—N֩_ 9mAALeSU~S?E"KPId!Je;ʢ{E! -L]u4>J/]o"BC! ,Zd3cЏL%jQe!^λ@ $XakqGwI‭TB˖$.NN9#pR:%dpG_ ZԼ$hhI@!rpђfhl##FHIZ /?6Tsvi/Y!i궋n2:qϥ 'moEh9: _> |$6Qk1a3˞h0*|锓b IFx#H)C-! zδYi10z i2%Rt^aQǻ/ &Rvz#??s$[3?AlL-ˆI+b9NѧfiH60i0܅_ fwmLm{ޏd{oI,I I()&S$h) n2Ig~pu0 !xv%DY9kuT'iL`E&|j;PQM9׿W#S0}-uwjA2]%Z7OՒËCQZ0nKcU4w=&%_qpI{ m!i|t ލg^k./fFX~mμO|wyq2=v1@)9s<4k+.Ԇ?]^ԡoOp4|Cj$WV #VcZ9"'Fi8e`YŲBO/qop{Bf )ta?i\gjOq:] *z{q6Y޺?-YuyVZkg!Ha@`]⍻O5:Z&_Z77%p2[AH>{gЄӆ7A2qBpFx$Eɤ9ts[f2>Zf)I"dˡ`CӻfI1=qEiűWj[֪]tַQΩǸw= qWu1La2BXd098]Tm4I*)Y]YsULt_U4֋sR{$"+Clb,}2 ୍a1@ȨU딄R픂XAv_A=vCÇ`wu6 wlB+ŧmb5r-#֯)ƱLv*mj (_xU~ /y3y&/3-EFMب'#_HL.qEˏ?f'x,v./i&ifeڦ˚=tՀ_ЁЁOKuxgѹ8tzMLf_ky,tzݢ Siw\{?7o7^s3of^lC߹'}MciwلJ2W]6XϛF{W|~qYO7~zԋږbxv9ӇG ()V@BTe\֝Nw{;&8r RGash,}99Cu :brJ{0!eksK#]6KrEf1Z,PwV:J-hv"׈GGjpW]x5,sҜ=m#eVwܶA|OC53JbX\:SLuݼU_9jʜW}F <**T^iiJ@SҟHIǧ>wc\ sV섍1r# 6̽*TvUe.ms̲}sBZxPƅj,zbS%@}N NA(7u1DhhvhQ9g?}B_P)u|FƩ+ 2UJFraDŽ7~s1F4^*NcǨQ*D|v\Ja. SvUoɓx:[*{2i%o-qf{%VaŸ&m| ~:kcl;t{:t$4эrxFMV IU3A;HVuhP& j"Q&'fʧD$`yb ]R"%6#2  mom:hϪ? 
?p?V$jD6̅f(ɭϞL\8?rghC!\sYv<#KԂCǼ0}% ijn_dϥ@7;mHΊ];i/-y$:=̂:K\KqOGPX/3%.,#a+Vg$眼?cˉ}xL35V2^e͝KRbhI'Rt@9IeCcdKB+Jg) Ie寘<$RIҥ*e6kk:Y4k?Lr˩=P5:y`=EF=j/39^/CzRy κhUZf<8dhs"`戴g,8 ժdX {&1K.HP:M֋RĜ5qeRmK֦Y2G)mQCe!, O* o$^\ knsr$LE^ O?V$eTNkHt6aVdu/sǣt-1G. &R) "A!Y",&' Ƽl~4 H֨R۶,m';ۆJ<*h@ nɐ .hM.AUddkFuMRu<~1m@r20> 3Is#@j#ptqZiOi/Þ)Z"?/Bʺ.XiGJJZndJiX(2H+kTQ8إR˿颅Z[t-6xToDObVAA*mh?̅biFuj3~n%UP֪J KsNLw] xWG~_˘!O1ޗpJy9ZJa4L 8Cp,VQk@H#e tj@$2qLef[v^6-cޅ|<|Oyysn<2+r ݬeb./NkӒg_>hxf]DNh2Ng+4}6;V:|fm78X2)bd9:#I+=`GGˀq9s a % Il+߷z%qHZ3D3Ɪꯪɩ] 6'HJÖ55qkRq ^wV:U1>$hWt.EydD$+ޑ7Jf?r^ִzYOo^ִG6fsƽwx4>v02ߢ[ D4a3ad!B46e6fЉ3ƒg'/*l>3TV(c&.$a.spMTI'st䦷 [gCx>7OĠE ]F~oۛ[ZĊFf-igO\vCNz@,W"ωKM.kbN+p)o%iɀ5:G_t q90]56: jZfjbP![*8k87 Ih V}'TaO4[׻ RVطn2Z#}棌(7(0x.U0Ox{wg\س7״F:$T[VBGMiVެiia]*f}>]k"Z|AbQzl:ݭjl OHlR󏵷\v|fLT}(Vn6!0Ӏ@S\Dz-+}mR9fG Pf'g@m8}( 1 JIIe#3W!FbRKGj gh-fe2 s(<[o3\䤘˞iecgo9D26x9g5Mwt$3-U]ӾrQ'ge0&){/~Ka8 Ei^?_(G^LCqA~Nm ː|_ ]/O?l''ߝ-\ȻKפd+Y+nKEE8WW'x9IFzi^ ]uBYr&>am 2 >z?rD6e{wR 9ֶajRׯꥻ7y_kfY5ۛ?\ߌ}I Sonz3لe2{gQsӓ-ycsQKH.靰n$yF'ȿ^qURJ_:]9*09aDr$߯2LXKzu NO1܏cș;8rlg Z++ s\.0Nu-"XP+r|$嬭q M Y1h*ɉ&\vVm|S uo}^?N5›=~;!W6TTWXW^ A\0*zӫ_O9y5~.Ѭ_HOV{׮Go{x˧a ޼P 7:ϛJS׼xz0Aݐ}? 
Ԝu覙2 %Cu$2P㎘b?ѵ{đ̹ ^u2 RmCK*ʡSO+AUH.:}Oih+v?02AX"F㥲y*Q(ȥ.iLYζ ,zA:$'m!GaV}8_>]{\k Ḯ:QN^>~[{V ɾViG eJs&hɩG{G2hLHLЎа$79`4S>%z 5wAZAbR"% .#2 fF s))2 ZGM ɠh!4\ʇn65lG|nL336 -(IOAnrKhQQFn-KԂC(}'ʹ7p|/>&rp{^|7{o qf֪-86$iRPE)bEr4a0'cNZ^J[O"yj*b?YM,2d>11=-aQԴ>T 3Nr% <ȏ'rtsH9feLY@0J.qqMJ]H*[IxP`T|ifCrm^Eއ)Ka2!0P6O@`8\rA=?*oF#Xm18eV ZZSwθU˨m fUHYrA'21;/"hs4)j%r3-ck⬷Q~la[3㩶жl mg wj Q)!hYq&,/wn$eT@95& GqCvJ6G'BJ ,B[Z )ٔ ^C ѐ-̺,%p'mb&z퇃\nmv@:ְ}iZiRӎZv-?͟#;*gKVSfLJ&@F X}6"DJ IS*4 uI;JOrdO;>)B>k+3":1i޹R Ր(\l1/  `~ZL) l^mY*TrYQPk5(˷БMc_&U?}X/#oXJ;H'He<0wuO8/WFbBZ)*\jsbcd.vL4/ci8 ELnJy9`hl@Cp,V#Qk60_ $f ti ZLf3s-G[gó}# RHe26q!X`"NޒX-"S$r`в&ΆQӔ,Q 5hP!7+I!Kc  ")Q2B5B`_t&:fULP&yIBm3]b>Q5e'RĬ"7Z@%v:;$JvWAn6)mȴji3d=M^dʚ Q$'Iw:v@:6J=to޽:Mh^uٽLQa?͎Lʥ`@\u/y QAj*q*P@xl}5hX{.`)HbA2e@Ǔ.BLӹD)J:g̱;Dh.ۼ&0 :N~]1C91937?pqu]f۸Bů@}h^BNWZY2_yb~zM.7ms]V>[!Z- $"0\A%ǟ3rYt0'DvBJARS][o9+B^v'm^dydg63ؗ ^mM3Iϟbe˲ZՊ;HXMEv}Ūb$)]n*O5Zks( +>[p @_,0arQU7fa{q4,ϒR.1gصԕi+ Ι{eCqTԿ6}̂,p3গ W>1@#9jgę!JGsֳ.p7 ;ƧЩ\O7s2j`8 -l vi fRyjepY2W ssɲڂ=kbgooL-,ŶMDkKZmbü ݶ 14>(\+l:H>I% VI&uJ3uCq+Z혦6dޱYYmvtkg;&)קvr٠eV{CaΰMM9*K~]3|;n7t*-l֙.Azլ|M6U ʁiQp;v87 ppq9~`4ژ TJ0('\j8O"(;4~gglZqzuHV*ƓƭdUnw.-4:Pm꽬o=_r!~O&Tq fBLŕLC+J͆+ϰ F)8 qe#2r軸T@sW0&L#\N䡈L-}WJ.q ŕX,`.^Rv4\[;{a $E뗣iDk?6ߑŗGT#.2st&XɃә\81x1T*2QL+4)!H0#`U&CWZ!.2+ MxR~ʖ>:8,yU#\9KM"eLL/8vv;0'y5z,Խ=&dAR!рrBqVo냹\.lrT}_\oM95G\ Rv/o~تc2# DN 0sWkTalHEQ)x&<\iB)`aF)*z(JaVކT3(P)J}cÞD\'߯o2Ck"e=Bp dLa⁀b6ËhT`R1(ͅ ˩INxK 4$OJF-UL%8h~ XIc^2E-x_a vRۘFUx9e̫g[}3Ɨ;jb<|S(UJ%‹bzy1I5$Ρ+!J@kͬ +4G/2\iu;!䑌YQ>׆Hטjzd&B:"\~,Md+,Bp \2Ȋ:X5$JVi'D֔;(gI>I#:E7T&4ĬQPF.FzI<!Lh)c+nT1;#g3WY/ϧ%3ͳPk1e pHH0es8+:SzN>S?sjC]/-[>/fl_Z3%c3 u1|f79k7m뮷A-` 'htK5gJ0K *7\JNB\j&2FoMp,gՙ|  ctFΖ^޻*Ȁ EIʨ3@X :jqCzEnO4X {S|RP8?("MC]%Q>+A;OuR9\ٷ?/;PujIШEL&q,Inpa 7<}((dH,OcʪbBpSgZJ‹:NUA?hh([;㙵5"Ǚ?EpE ![ϜnwAܓ)'svuF;P6TqP ][J]ᄴET''LM賲y QI.Rfw>_B@]}>0*]p*fݞ>=,wYRua0kc9Lnb;o0ۭ?̽834L[;g֮MŹi5`dـ7 !rd 53PLI%R9KVPYewWOSB=̟ndrw-qFZb}7 Ӣ`ͤ-)?V8jepY2W ssɲڂ=kbgooL-,ŶMDkKZmbü ݶ 14>(s1N)')$*6 Ķ.}k5g5IM W1M=m#55Bͽc6fg5 wLR.:@]am RCMI!7-!*.3@Fp//g/I,`WOl}u 
pʺ.!.`Y`rFebNM?ejC)$;^ؿ3/@pC`JsjnD: qT#* Jjnvy{Rw@z%ZԂ{&DY(_11(JJGN꠽*vs2)wtIJ=swDž#z:rWIZhe^fL2$=5>0g(>UTr?}O j~C]&~dLfWkq#)) rtsx&_) 81ElV$Zd.4Oz:ȎW;.l-HU•a[_-^:]/~T &_;[!B#޼ SBUGԴ͚g7#Z5{sǫiEl1Bs~M̙2.d|r:[[nnvQVv5N>Nbbm$ZGrHgmða,2;by-p*|/tu3l<]39I먌u>ɶQjףVp!1>< We1=9k:U:#?8Ev￾~_~Ïo}Lx?Gu+0>uo#GG #3Oвq\m6]κY-ƕ){>_58?ˏex3f'O>t5*g+2+H6lVz\|EM?R0? ѨC?Cݟ1:8%>r2G1dLŠIN*F!eks8K  c6 2x1ZȑagU#g]n*L'YpďW7.o{t#*Icvu j)ݙ[I[%*ԑ+W-?!=:5JiMV+&/WKl~envmzqP7Znl݉tZ߃gӼVwt]o9;ƨoYs=u}]ǫ Tߴ2  @V;K}rl96go)̊yAa/i|`rkxB !T#^#sjp6]『#vLtdEG {WRHBIa6 υOpA›oVilJW{q*xhoGbG'2_Q/.yjyܰdxJD( e 99*1g]!>60xu{V[Y$s2GSΨ-Z̲f |)g4ЌRch0kKn=P( If#o_19XHHB 2z j[Dq#o{ ՟={l߱쬋[0ʸ`K*<8dpq<̌5?dqNw U52e0{OtzR0H<Mt)2*ښ95vr]X3Յ.4.Zi%-Nʡ»iLٽQ~a/$"r~5O`~3.ˇ^ײ5 ,nHAxa|uAS.JOب% ޓS/H#Z p'zKs&i MOyd:Y)ghFx ȉWBF0M=f)~G2޻hSѣhSQ);،?&8mxI4-tf<8Y6;JD{`zTU`6:`4P-4*Xפz,DFsLSJhK#gzQ]&p`{D"hqF| ų#!~|Ėm{}6!h/;jY ٣ti3uũAHå(qNo 7HwƙBSRHaڲ-PYVgro! )%jodvF2q.d!v(*Tv@P@* K9HBNj3t#奲xF-Oevʝ/*1H1듌e"kAn&IQ*a5ը=2r^ '䓤둌S(nw Y{Z#R*'l Ns.I K*FV=[z9[\o&4hqq$=6mrB*%񃺬Đ9AIML PKd>:!)s PfYzrN::Oz - ZZ!Yβ$t.Β,As))5Zj]G%eB?}ɇ$6OɬXMGKlW0T15CߎF>C[կoٗiG BqO+'啻Oxc2K뒢O5dK,umd9nYz]ʎ mc,Yq GzMղpۂ KL*uB`zfU랃64yДg͸^Ta!g9c(Y\N6EL,hf(BH>*,Y[ Yi`YcL{?\+ME]C97j>6;p7sH[NșWݙs=fr `_s"裸ctth[e$+@o4sf(j ĉh.eɃY Pg(ޓHcݣHc݋i Qc:̹r*QH QtS+6)B0*(54A-ۂK㭯j Ii_]&c'0!lc#u;d~Xl ZTcxD43{" l>'V8ˍG&d2, 9;Ͼ4b%83_~(zzm_E|:8ET 5 ) f|L.2Vq<9#3@s{_g]pl>神6X휮mw^}_ ht k7MesrO+Ge rKaP!F)" F]QWqϴgR g^Cn?C\<-N~7'#+uۻ)M% jc"`y:|`Y!W'A(^X(~2]j,;e)Eԓڰyw ?Jm Wdmmx,Jl1mO\om}x}7e;jڐkőiAo|lP.'wͦ}osj>܊kf$tcVΰMOxKTb&ftQljr>ɼS*m,օd@_vkh_^l_xV E-IW O˟XJ+o:DE&F(>0a$D /hFM񜘜BFk>- ߩK"q/{},wPo>^5 ::iTzN`?JCiBKbaU+ XZX41  DfD2#?Q}*oC#JFifc\)2flUԙ:S<R S7dsyy/g6~Lqѓ fk`<ǿ5ݟK.}I >潺fzyo+d2Ms}ا@^" G 'lSsN;؟<ُǦkIYj=R'gI''JǛՄduѻ܋謕y4!u!ԶdJزfk={j}]Xe^M"ob?\89Z@mpfw2#.ҕl ??{?|` hƜQ:{ߵ-7]Z(®6:8X]U+kqsM'a+1RfŁh(l܊y&/. 
ny9Z%Wպ] =u7b?Fr?ƓV5|+ ݧqp͏C S~(V@؀&ro,q$+2eUlGeʴV</qV)Z|b9Oe2htI|LƤ*F#t7&ruPHY"!(OYT)7X盿_e Tug"ť &_QM vw}fivͣgϜaIhb䭦uuͩͼm}zf_2%=V)twYjy-ڶ]~m{ɋK럼ta2t}ޯYߠ#,ZdypHGKi9[2n):Ħʯ})TyyR#-%RۦmupB#koxwJl*+"@jպ qQIETJݷٸKE[ڭ)B3EA9.4RfiR AKrL4:%E>9(\qHDR<߂zJcˇVߙ N'I*nλɱGBGB37ttK9j䏏ľFѿ6HGqfI E#BG=.8B4@K7j Qyi$vQ*E$ň7D :I (8vOw+ӝAQD}̹s4wc'il[xVᄂSk&U $+Zgc4s8:x zJmZҰ"-5v %{܀o+y$~~5tKe"rTۋi<>O?Y*bsEz~>ϽW/Tr%ψm"=q lTTT''{E-g%/-sDLKI)7!*B%>ߋX(."O{Exё!Ye55)@sp 7k7m8OnU87Yh&=TK.qcvۺcHQ)cqsC:r͎$]_x0~#䶩oO2}n;voAw [OZd<0qmH)2"g$IjE :{ jhD+|L 9!^66QTHG4ǟ?h>Yq*N)}Rݤݸ)4NFBʎb ..#9aK w#w@zG=b ge4%D*@pjT,eQATG@q(x 2Zi#1ȫQJ[/3g_ޥ@wwK>WDoOArIj3 Hz'̧|ҷ3; b*y$cz頙֣$ <&(A7L!R%Z3+a`E_ԧ+TENP Z7 4eZ1VH8i().8`f=\.`>EWq<# R4Fр\Ҕ"5yU=\E9vU(T0|@ƨ $$3%i`VU+2Ȁ9=0Se80(3Y`jDzM^,fΆz't2|K+)p>8 qۗ]TϟzS ]NSCcw,%yUC5wx`u18\4u2j2Yߕtd1e{yCEB"fRi6r:᭶&9$yBNR^r*yR:09G8+Y#8({JBt3V,Y2 dD]YdDU2JFTt4CG6U2 dDU2JFTɈ*Q%#doVhM;{ ~5覴o4HxE~|275 ˓ '}/*R<,UʰT*ReX Ka2,U&2,UʰT*ReX Ka2,U"*ReX Kaa<ٴ施/G6̣b~hNͩ9U4F#_ B'5 T*tRNI:B'UdGrtW:YBI:B'U T 4FV2XNI:B'U T*tRNI:0 Kg8NV\o|h[3KO5Q|2Ik(L鴚Y'ۼ]|x/Ļ;[p'*$KUU"XRHr$1ژ j!j $,P9mseE 1-6#`A d)s6Bp|>͙YNC`wLR^B [2DB7%tԫx}$*a1ݳGoق [HEeUTS,FyVF`bS#yNbo_1Ga("VFudXDcZ)"V_"D49#g.e=2<9/+̋rMW tM_>̷7lIc&˨\W_kgcZkJ--2Hʝ/ؔll>l5uJ~5nX''I!޺?\ٝӗ@&dzꋑҏ^ywh|k_v~_{]~>Ájh4Y凹gA˴esbzWO}=D]]&N ,Rg2D'(DX,-,xGI4 Gg- p^xF_*jN ,uבgD Z,& C7ElX2ї3\*emgɼG2`f|>Yg 퐫,XY(X(ފ|2Q͍ݽvVm}@hDLv*L%-w+5}r4 r[o}Է.C%hev >Pn' WFՊ>oLg q"meZYej[fHΠ_OxSVJհj84WKmu~M:чuZ_88y|YLxU oOb/@..; o ύsV1dQ1E+=YkF09HZSkBxvMofwmGx"ϗ,I]W- QsQ7)Q4O~kӢ)-ߙ^NFU|`?~91iFE W@ۥ?ׇpcq5I^WϱTj;+E|2'|QƺCK@l#8Fcʼn}&|*t,20B(H-6XKi1\biJv=K|e6Q$ AF[0زRr+ (b[d W)g I^)( vxuXMA .g@05` ˥CRU9 dcq0&1 SŸpU\Z+f[_^O/^/~Tj*:>tk!g75N~SBڔ9Oō9y:_pݯ͓.o{a91`fm^ۅ@i]$wFo@VFj*;t5 F 9"|Aa4u`ZŴGëBWc% uJQgk 5Jga&#i`ĥ/+_Hx!.a=j"t*'~o:Dž\;?w߽x:~?o?~v LK(]$P1 nDCpgy_Cxb ukiS0[;7[kSFݠjf au:#5+$+/+@6ys64 vQST Z(!gtŅߧ$1H7e;/QbFM[khЗR`i`Ib%,;JOq鼲v1㩠ĩ?R'12JeD.A X1]o"rS)#S@H}&;px7r<)3=0?0'#X^j$L˯^Kcڒ-Jm|,Sν`S0&so &#Ϳ=k*lJa^iGjd^ wE%)c v7y8bFzf-g$al!<~S~/z0FKdRKL6FgT)0e; S3C\[OtBI .g;U0\Xpt!j{L@Q 
eZ30{-#cԀ'}K|h\oentP6N㦎s\xqHUs7J7zEM 㢺d?슮]xӺ϶ӽKF-"Kԕ&BK}Ȁ.&oyɭY%ԺlZXkRyʵ]vqf=/\xI~|ϏGqWWtM8o2M[.0%Gs֜Mtݣj2ߵs+qS_ܲo<9|=*<bo O$rܗZwDR\x LPH]F]%r)ue;_''QYVW/P]Q^+ 7*𾨫D-"[W/Q]1N{j}QWZU^KOESO`joUЮD%TWt|`1ryղrY>*.- #%aG!Z?�}ULӦiKLJ 0:h%%y 8'5-"2K&febtg'%i pMy;:?v0 MACXX-3[b0j< bu͟棲YRqXtğ&&MloOɈ3@8 )ìEtyUTg 1P{"߲7*=2a޽910 .fdrBd#V[V ,oy 7ν @=2 ||D$jJC/(T0Oir@0ߣD.|OF Y] u!`Ĕ7cdz IR2J|X DRH|P,\{s(R,ic2R. #1)m!q+hzV! ໩M[#8 ?Ehs7a7ϓ?fb1G-ૃ߇|7x%}v'_-F+v\+N8m$)\ȝgQ؜q*b`Ouаk ;Š^5P2zEjPG S\-&x#32x5sf"(5co֌˭х}qcuY oL/S$$5׌WWWof89To\csTR g\k+LP ( O@O I^(TTF{Y^56U42XOP& fKK%) Ӂi'YcFjm#\;uZ[g-إTsBQ=uP V LZ c" lJHI췚aBg ^<^l>llvy6x7E' px8dXYɗG>un  JhJH@ږ,@,V,*I,KreGշW/#V&_`߹3{7σYeYAF7sW;e%=zdz0%M%,kVN%3 NeNԔV/sJm`Lfo*YƔ94GIVg>1*Ӛsp(T9ЦS?˳9.:+_}M:vEoYȲ#=A/nRvr3l"FK$TH@1d98ۗF[\JXOǵ[_lܺ2NOa:YJ{JRQ*Q5Nh:KFqȧ_9i0y60VPԖD1dI,z;{|Ki Qc:̹r*QH QtU>5 &Sevd\邰.E%f`*C%JNcBkǵT:;54Y߈K6dLI=GޣwNǴeSQL ̞<=g99ۡyPJѮEzsv~Z̬,hCg|?6QZ;V9#'#Q'Lޅ@[um㭷 8& ǗeX ɇKu.;8c.7f~[tȿc鮭33$arus;?osU7rF])~9§fUYerJ9X6UpmdU@;kr K‡g22rV2uDx78Nc/ǐˊd sŲ%t\Xz8P<:zx25SWNYJ_Ւ4m/oն Wd>%k[q96K^=0f-bkNiޔcLq~80zXހ^NrWF|t>6@T7Q[3@Y/O >\ ω%Vc:&fֵ,CeÍq}[ϝfb]@4Nz+Z6$:߼fF@;ކ' F߀\>jtuÎ;kTRi P'ȚҨe&h3reRU_je|iKƎ-CݚhOR@zs>ЊPR^A' 2 x?<_9])60.Dx ȉWT`F> xBEҶECiۧg2v" TL<qTd.i\K3̂ٹD)a: AAlaHmD"GBsuQ9 z,DFsL* %y4ҥYӅ!2 û) f< _6od*&5$W4VťͺʑOگOoQ5~å_F#'ca!G @sȥٺB`<e }̥\9Sly|@-"4 0;LBV#mQ[2)%=A2ꭦpq˖貌L;o.;U51(2&JELJjUA PQ+4 ]Yµ<$^Z$cg~7] 6B:ֱ ݁q2*<\yo|0QȬ> AHK.㵺l+(s4I 2@-[>:*k'Vw+}7H]Ӏ&!C,,Tβ$t.,A~ʔ-rk~Y{O_&!7!f9Nx=l_TDTSFCsЫ2ۦ!/v5 GC.j XE1x9z&7؞=wژ>^O|m5$81[7BΥ &,Di5κr) ebA3njScki˭fj>nfFK_{s"꣙ޡr)]Δ.h]}rإW65}97X^CV3զ+ *ۮ JzJC*p Z mWWP٫70XvF]!\!+ m1XP^]Eu jᶸw'm22\ˋ O7{«iÀ ).?Ԉ9ėSG Q=J\k5]Z5h5mW%g~jZS`G^ +L>QHo˔w8P} !?QZkh< `)\:^J 0o1z\> ܮLPSUr~9W77[1zS ,j+ߟ٨% ޓ' h$VK d]oL]k `Yk \-fR.7 `:.+ꪠeꪠvQO dlWxl;Dcۡ4] PWЫ]_=V!uQW]QWQmWWBHgގL0RW}Wն+ r&ۮ J{uՕ`R!9uu,pygZ J٫]2 `; \h/pswV4-x\}'*6wW8spCkt+t;Q9SNd\.\0Lo\H]tٟ.]ET~~{wmH˵w YvvY k,99b$KZ%ʖVSdMX= 9$[$9/aR8ag7VSdl2E%x}=D+pf%zN"SNjO4i&;;s){gQֆ8JȆfIY_Żލlpd5D^OoΗ?p|:JauM!ob^q[HRY[Ӭt)S៵O׷l0X0Swrs?K"j_nuK;\ ǵRs %!׷t 
km(dy 1Q8yN>=oݟn*#[wնAhu K>k Y퍚_14[*U8'T6{sO/~?}㛷ۋ\}ku#0>'Co˃E 9@/_Ѵ,47oإiS׳ߡ]Kv/澿5Z/ r0П~|w1Y̜x~DVϠ+aWPl\ g?ROj!mvmr8#5<>_Oqo( ˓a䓓9Y)}-,KGN3G,Ht(=az|>E%EyjҎ3ogR۔Z^o ߙՍmzovΈyb^Sbl:l{cߵ6MYpUt;Ev&qz{s8&͉c԰|Xʼnm£v=9iV {=<*.XPNʭt:r_<Gq3BW3'JrY'R= /=Kkt˾El04]{u`]\N?8ka흺F w[ݻޏң9Nŷq(@sSZGz"\CL$]Ԃo> FfȬ{BG)C2.]A{Q 2{S,GT!\[6r9a۫o?Wa Ͳ3Aov3Amb6k>܋V>jG6 KK`uTcMoˏٝ!L1x}Zx&mGʟ˜"8Z`A*"p] @u\ eE7mR:e)OWvYj;ud}m2҃VI$>\x) ̤@huy(8՞ -G~V?m>χc4>ߡ…d}}4^zuf8fT=ۅwzt3Qkٳ(db}U 6Wk|omY KWX.]57W9ltQfeA=4aI)ƹR9  ֈE56C&{60ceɕJ8% u '`& 2x"^Ca]Lۍab.6;ڦ6< F&&.dXN n  @F*&R,&v}XAdV͏=5 e5⬾N#v-˜MFK)JC I;hR^>w.HMjDr(&)MyЂg\pzDE5=2i&4 Ϝ/8;x\F|d~olfɾz֋Ӌ^4S  佱-q\j%&feKx~AsӋЋ͎}!+~*`pq U]ry?~;z?3BޏVR5ywz?W%n[,[hS)FE%^TN* a\;ќx:8uԙwT!p&\RV0L>X[ F@ 1gLNl[@'aB>h.МƆB4*QI5.+$W(UF2zu8.%|Jdsoͧmh+^l0utzS]5= ,ehv3B'DB]R Xe""Igdй}w Z'˽uPK/K!PDFL!NxħTIn hBw3+О}OZqCe:Ķ>5?)ayTn@̆m;| if:[}LlT1~b0EI+QU:V?B-Lu2[l͟,bZD8XA!9F})>H XcMzMly))\Zϱ@N'mLT%3⩣|biǴ҄qrP S.mx5y3{=H5|q2srOwQw&A>E 9>^.J+袔w>ߢ.8S I2 PW?{\9J*Hτ"b9]Ni1Bzw!D/"%ٻ6$+ݗz`^z0w8)(/U"Q4%ف처bFތ'ROٕrɵ:[f .Cr.\ IH+2p/C -D"r1P𫛛?;t­%f߉?Ɓl mYbL t }m^; o٭45w"/ ǻW縶t1,n.+w::g,)@WvfFt.*8h5?gu"{t {ehv_}vd'g9KማylsQ|Ek[`wV^NX(|tp b>v;Kc>9r3ij>,v$ $ǖV(IkeCo@j&+h$,vtE&m2mGt57k>3a޷cFDݾfھn5)p"DhZ`D6/rLVꃦJNmJYm ֩uC>iQcK(AHUh(O9?Z4݋R9ޭ[liEZ)/i!ڕZo]T8eYz;T"K5\s2X5jg*7&7x HP*εfDiL QRSkq)mv>+7j-6y#ˏ?́|v[!nLVg )K2IC =F$\KSmc&Tg̸DF›1-.bmt.)KNI8_=#-&U_qVG.YV1+!h0ǵ; ۩$+Qޤ*Cd`rGL.\n4T{}2᚝-_:GBF?#.Flќ?"vE!ո^H13dɺ5=lJ97GMUEzr7NTRZ<'JIInc9$eNJ} Voa|8R>k#U>)e{'餞.]FT,XCD ɞ5 RȘgz KuXA-^NRȮPB& եfQ#2L3RQ/3>*1c5.d)5>H`"VM IJ%‘FiSDDJrJ%V 0 CMZe&5:/3(|" H&jZ5 @6!+Hԋ*]7xܣi=i /!!dZl?H퀱NXF& hH `f%݂иZM^Uѐry{oR Q$4y'*a* bV-6mz܃|w%/G1!%F6,k*޵%'ՓFjn0&z7wFo9)̘b. ^Or%5I AP`AH nQ%fTiOdPH;@qX 7m#2l)b*'('Mm2X4M57J ;XI_U 2PhQLj0R܌"2!0x#mIA![`qO+AU1t6lӺ r0H@ZpIPAlRPCx%H`u20Kfj)څҵuZO *U%zf$YcU8dm ' ZSti ja3^\0##Q ljrR9Ŕzul)\Y>. L\JꘅG@$F`$9D. 
XZ&J#BRX~:CoW4<<2B.8!TQ~ՋAc/˛n_CA@nG2gvw~ūjRiu0E69 dg顴Yuv{׋uz Zou[l-t5lqܶ>|Zrֺ+9=m+8w8 }8<ߺcްOVZI ?zGokv:6~X_-XxO[ik˧˚{( E4)ښAZB9?ioͽVޥp{3tFLb|0e}C%X9g\قIW;"p%<6b~=N6XL'w]N ޸,wXػз"~0O:`r ͫ) 7T9@vnS4+=nDB G q>f^]/8._O\.7G|a#q;?qL,}~aqV)cF=N g8FNTmq냾 |0,.ˇvY5eGz'G<4{IcY`Ѻl}q|q;|Wqu٤2 y6̽YQvő/1aFګXIX,E6x}Wt|vbY]F$t*Uj[H(ٗZryEeBs30mkl2Mh<ل'dl“Mx O6&<ل'dl“Mx O6&<ل'dl“Mx O6&<ل'dl“Mx O6&<ل'dl“Mx O6&<ل'dl“Mx O6&7ل^ۋ&6&doe }97OlBVM~&^$N>dYg+~ꬓVp 5V4 ޞY'9 D= DW7 D`s 8Ŗ4/yƧߥ58䃢̓xz!UIK"]6+] җ(2I{x0ad߼B?=F=K}zVe$KfWZq(k?H`E #_O~F;PN/KIMXlڟlKDuW{Ftyk!2Ş ce2s8[‘z"@ UE/t>ctMJ7>%QujG"8@ʓjy{M1֕M܄B#:\'b'NRvE!FUU41bR:clWiErnk-I"ѥX}ԗֲ////////////////////////=AMx;K0׉7hҫW_QϨO)"E{7{aJC٘^e{?`6HӁOD!@[¸ _IznA!*͗1v_{ԱgߴWUr:e^~{qi)`RsZі_^l:jjCLw{ & 鬎1xhX_z}xM;k+`C[ Oo\_WB`L|L5ߺVUFZw;_j|6٦nvlC_ Үs1m2%ADrjfWD6qSpݖI:]ZWv9(i&됌06[MB)),Cb:yd2L*7 E&3PnzEjDD5K:xyus*O5E+f4JKln҈6U 8 |.<h:6p A>/$*A                        3n zD ~{;B07Q!#P^TVJŢB?M wb'2.eh?d3=N7X.9$˲JeVb&(-yx.ye|]W PI¹iԖAQ;۰`,I&սOu&$Ur_ZU쏼9tI &jK=v3|C^-JZlDnh6]c'UCns=&EJ~,///////////////////////?~AI> ~NK:%CZx|ۅ৭a芞!NAxT0$s7S?XIPI@Ih/u8cL˫ K@*E2VeKT8TRTSjqbpS7Hԍ7?Y.Ev  `QQZVZ289mLp4:ϙ6GלedL㬩r,KdL͜^4!k&cʾ˾0]a^` z6/ן!Q)0OnZ%vz_[Y9^0St>_[A cBpn1f$d䖀a DR%X.F3`S "R_q .7DAIGT R-cܨ00NDƞdӌ͜P[o GjLɻCu4>Oխ(4-Uw Uy |:L&C9 TFQ3ǤSRxi^`vi[+mc4 [)F^<3.|^)!D/-LscJ:QKQh,a;@$رl~a Z]G6Dr{vC@( FfK9@< *B50a\IA'&xS+:-SDB 6:J9s(o"Z^=&ԅc+VKf:nMT)60S6 6XMK:9ܓ;djYAW&zXa]dCuwE@ۖʠmv5iMԴ"p9nT侓 .ёdDUkT秛'PYkJD 1>HS0}Q҈z,Imqk2kfZ˰Ag r3 )bpA$#SDu*μfF^3Zzx^MwC,L{nuGӋZE}u oӇ<;,#/\qSHS0i҃xZf_+ nf//50E˜+uXm[viZM; O0VE_ېh{oY>@7-Z1zt7o=Wن ޕw?d]Uw0#m6O'.U}3"~O`M<1TR}a^t?g?׏>&?:I ?ۄ_eC0ѭ/O5ukT]ns>u;ܗӜsGÇa}C;t@~r]<%q7mvҦfi;~a_Ӂh˪U|! 8~Do]=݇>F!xZ9>uG5&-" hKfĎK)4``pXd(GI1+K81jǸpW"N-8QD.#rD.]1k$&1pc,;GqOqY#jͩl} Sô)i *1Lä2{ 9n:uyU_&.Kjڊ-*m|R)%A ҲK+=9FgmGZ|jaI<ס30j[cR;ĬW逰 9HysQ yu Xe{SwMp L:Z}]n|3 0̠\ oaF! 
'OC#7(]^}n~/mh6w+ݹB}.Kżϣ$s-Yk[Z36 -L땘\LfOo2 iiSUJ^aA6:ESoq}l -ϿPҥ-K|sm=1 %5Hc0һH Wpa"҅u̺'C0!9D!e!x0k5f,`ZFL&Z']e>{5.9;2|Lֳ]ԳOf(1;wۓ޴R\ߡ-Oڜ.MBEM?v얪Yx׳.&ݚeȍҒuյIDhRMO bxS3äU[:c^R棌4[vM9rck^*?L ͯn#mޟ~tKބ6Kr`KOaKx5?/szH]ۚOٷ'ubHS_܊o:91v]y/*lݠ], Tv+mCA ƙARLYn)6x#Jqo+j`(F.YcrRn`os-zowxL/'*fc[w-nyˢwr,aT⃱/to/?>3 Z)$C+KFf 4rPdDXaZ@Eh,c N mP 0dX;*agd%Go7!`d|\Q&E-yeinmVW/iri%Gl$L(0,E&: `Gp#Re qkaP\Nu3x;844NCO"/Mv}ſy%& #o5ۃףa= c4od%R Q|0a,$Ӊ){*9j84@Kd5ɷ~D|AO?}#CVp4xI4SB*Y, 3#NE̬a.GX VڢHqbrK%bdFE3sf"TndfvdUaa/X DE4&1eӻ*7~[2v4ξsXyθVN I^(ThD$.+bSE#fHΞQk % 6N0+XB/K(R:"'ʌَv\%hK;EmuA`7&%J E!C1D>:D`pO**"%14 iU3b^Aր D91Xf"NudllƩoD&`<D?DD"bW_AĂ}G>mE0П'1>J .Z \b0FHHqbKFbS@=x҄Ij$fX;c6svf-t[=ld_\ęq\,1b"%9m`d)hǭzPZ .}dC2&&+ JяSL%U{Xy7Tk~CnxLJ*d4w\P̣xxG} %GNE1ne8rk0 ƂP\DRZ,*ZbN%H*meqt6VAmrj3 @_sМjL t.svH1dܥY󬜍tys&M P}\3qҗfے){UCjwIL᫯W9.lj,5Y;%* kpOo |IvYۛ>C'O?~L?Dh!p5=gj{ȑ_i6-E@d۝167dȒ{Wd-YԶ%[3 fb*vUQ'h;{. R;Va7*X~ z)ea$O.=93{2;ʈCfDg4jh8n[$.;Ս &cG^vceiN|5D.Ւ*E 0Ïk-{N9ަsMgm:;]);sVlG<@{t=*̋GE_F*Q,A)ڦLjyGXx}i:/#,r1DXa31rL J&sedF9H2#uB0,#2 29uRSFD0⍢+Eʘ?d4FΆF)xWACn_ vpԽ=88X{غLeZ5!G]NjT4Y4k.4rFx%slȍP7Dߥ !l13Zx`)3kn2:!*KIPܢ>#g/v9g_{^ [S1qg4%hz@`J|[Ceh߿-F*_E&V{`B!;\*9? 
+\%; *z(pj T#\+`U C@-{ T2|W!K_;pFP.>''ȕ(Ŕm.ӕ3YI KpA4?}㲩n`qwxI aCYIIJ& Qo2DQ0} fɂ*NR\F'[2m7[lnyA*uax6\\m@h:)tC[2ꔵRg= FICO?xf`Z ũ8 #\}0.P+8*x6C@0W\.;\*k;^\iPK4 1o@{F <){[ˢ#v@secj9w T zgbå-$ك{LS{#ɧQ+{_ WW.==*: bp$W/BL+ ?B@DSp $t \[_1`0L~ffgYUɭ/~Z5&42KUD%9 䪃Yv@e-Q∳/gF } D~A?Fp BPfﯼ??17yJL2]9$^TRAsSmKy2i/'?_'GNRfϣlэBVi+MY8[U~JU*3>dZT3RTZahIqW=4l~ I4)rȿ8&([?rw[ɏ/a6WK\D"W$7ฅRc\S :xJޗ O@|n,7!eٍDp*5~t[ϸG3Ni\˽>G^#;u/B!dݤ#Ɛ}^:IYڇTyi ZRp#X-e^oǀ jL0sRJam.O ڄjtSZ ."aU+&Eк8c}ēVZ>W-&nfgzYӌg: Lz; ŠKR3r˸NqV[MFs}ڛZiHUoX6MW\VVb-FʼnS CvR7g+/[IJ̆fgEi=Ԅ m(eqܭK|&wKaYk0ִAMޓ7O*j4DOFD'5\Q89?iө.5%fy*׍.|x3Ӡ;Ńj S^z&kyMX3s.r8%"-s84dw8kf<\k.ݎe,wS/rA\ Ëd}bܶu(gp{ k n>5g2mbOn"͎x0z[+g9zF6bkwGky`+n3Mh<5ZeɕR='=Kۊ+s+4(!WNʞuE0.qSVKcޛ"; 7ϗ PS%|zC| 0 Tj.#^NW"~@p~*+ ;\*< +1*';:&;e;\*#\= \I8E:!yH))ZY>&'Ldf( J#M1#;V;}j찲Iz\&Vv F8k萎1P@Tn *~ưClcv8_F \j:~w T[!#\= \i[WYYw#98vC3*XH0X~f|?~ ֣?+nzƯ43XP)RKIY1nR'*2cѬRGXѠ6jp˹Ƥ~IAҠ-H~8yŋh$ߜ(I2` +4Ia01&α̐ a(29!Bq/6*l ;qלqPkbB F߿Eu ~?lƷ'J y;߻.;36Μ x&+eQݴlOu(5P:BrpNIPz2t lr]hǕ]}лoU~<žO;夼ޏMR7_&]7=S) Ѧ ITʻԡyO;i@ٞmO>5t"| :[¢ Сu_$v|;]-ϤWS^ tBa< f7'ō?g84ٳ{)-_eJZ L'c_.o5pdGti*ĉAKsrc%Jh&2WИ`Ԭs1h)|e{`%k$0Ź!x9&rc6ޡ2Z 1ۜdќ,S"KɟK0 e9fB ={w4]Ģ(uLM(XmI&?J23F|e,&9ssO4Cb6)GiSkʵ`U hѾcSnfSwvGqcPR06X{ Pe:ۺ;ީhSsw(Aᅵ̾8hx r6>Ce7Jx>3  7_8(j;zn6(@iՖv0؇_mx!eUҦ;YՏ ڔ/z-"aLp%1fc$e3$Ra+H$SM#F0GDxD\@,åQ9 **IG1YLY0vQ`TX9r=EL ˰0gYJrðM5Fz`g.;"rw{iX,y{^Uhnyo;m_M18x0Jޟ짿tG[|4HκIֲDoD /gm;rm̫.ۗ [|0PpVh&scYz~/D桞ebXKm^_TY9 jt+(KuߏNowgiG^^7_L1?X:r+gE"r]7S29Ҿ1[񊙀\̤l=abu'=e36EF\?M((Vunt6Fo:9a{.U_NR&i ӿ19LB{w;(|ZC~67ө Gʈ[pV $YU֋,vMgmURLTe\1ԗ][v=gy&\ŶG^Z=k߽񔍎ߏG"L`j% jXiI9V"PnYnQX 9(+d2w}yn;υgJ7JkRd!%1SDqQfC_4#4f<+I 6?K ? 
wDN\Wk=V|_xܠ]X R:zL~Ta%i͍Via{iYGmao1f8Kr$̈́1gb%U:y!Sk1ZL.QP_i[9WgE攑/)#ccb α:7#D#LaiDEeQSբ\;0EJ Fj*u4ym#GEȗY 2 Y,؁b-9s~nɖȴ-'$l5Sb0IM@-hIP9j<:+E{Q;Hoţ_(GijZ=e%}{ey|i|&,чd8S@b9fZG E &I 93 3!@%E.KRYzkd-fiAژ(MR5B`:_P,UhBNgKLDMG-⑀LX@K_v:qkhZZhJ͇AvIƲPLT6T`DP̒"4c*k%h?$'@4퐌0B־ #ׇg|"w hR;>ܤ$ҐR1-Rm2K_OI+ֵ=:S^ KeRTPfLJyQTB\Y& |ʍ΁9>շ{Pm^znopfEVM*F8}e<5!a7]6Mr7W߽O'Iφl?/;w$7?$EoXoayɲ۟ҪMzVNO^ !7l0_K^`{ z~zOiM񌊣Q-Bϥ8Z]/V7G9'{0t{9>9-Gq֨>`&4@6ʳD2}YPn0CcDilG)YtFX*H)ELمHrfd31ixgtyZAUjBgKo~(Ο=D<P[k-z},`T?LnIA̹ ͽRDO A*4:Ex\r"rWM؋}068I'1ϲͳo"2< k!B;`2ch L$kLm5CehK>R'(er/)s2(v>;\I=3ŭ%ۉn&I';J rYCA]LzdW^epdR גD'A@CIUFT%{T v SO-%m2$z7,$@n1ph:}./~`'a݋MVxl2A!7&\f9pRJ9J%W #du((A+H$* dLIk4 -9$US J&ݞ@hy;p4 _x\$ۓK\I(RjVN\{<Mm:8Iɹ>aG⃙ye2_,7cG(~yG7"Hq c8 IL읰He9))=18Wu9Ⱦ^^ųKiӮu:gs_yT{:8XH$z/u?o~ue&]^LjYs=~&>8^I觳OMyw`j)NW&YQZ~eeQbQ\<72]1@;_QA_0;b0 -$%'j4`u5R(L|MIC3a U&Hz'+9+FǒsȸYJO18 $ST1IN]'>@9Vv_l,|oYlDHKg::>;GFfL"u(2:B>N=Lj*]frRIm+夾_5&G I XGʌަE !rI^@Fs7"zD"G! )W ,%g3%UuIO0ppʽq]:*mqn'6Á]־(.En,q̮ybӓa<ҵuoy'ii<ֽ؜W*j sxw\Vo,w7kwnVNw6?vnW{~䫛kԯ]>yλa6eZ ߀秓Ujk<wҌiewl _r[7z/iD˵{" P,Um͟6OKC#|Jn Z()R「we2k\jH);&V7YF—4LBA'Hh I Fd q3%`6{T.U\G@J]x8 6ېumE :'(ɑr1d"#]1d]*f('}! . gO <"RK )`-XYP^PDg7NVO{Mnzn/w@>deUx'+>R[@<G!sZYKEtDqf4`sJ9VL%z0BmKE<&Yyp#=ZLJfFe'azgl .A>.szVqOQqCbOx~kl!Y@)ZidvN-rr"#4;&X7,Ȭh`2!EQhT0hdIH. 
P9Sj畭Ezbb֮jmWYkAk($ety lP3O92ABY\7=a@sE2!CȊ-( XRR0"h^F}9=',qW4b5R#*VW#.7hA#%V}m\HDΨL d!"1`X !i˹L2jDWbUdVl,* )D 聆[#YBY-W^# @zq\%_g5.V/zzqЋwjęҒIn1:OYFZ=< /"pQ#K;ؠB/>CQYb s+n~|ǒ v)itm0чF EV1F)Q5AiR%6;>P#sGQt;h4SJ{:l,Sf6ـJڠ0( 9\W]@gaCي8xN<&ʀi01]еYzy'nKs3:xndG[iN*'GaڝtjW^0]ZU3'« ×y\.6Z?bʍbnoQZF+P}Ǫ>4=1d $jd!ZIү 0Cl1 Z:p4G23!r m42ی"Zt@P.JK뿒*8VeibJbf.Y+JKa>3!k)J>9tjo &{L>6˨ 㤅 F"|ؘf.Џxb{K +ϑ*A),{P<Cu$ԂAx}%b4shi5Av;e˥SN{VǴQ쏏VFB9DQH\qĊAGo7Ě~&xV*e밊gz_sm I``_20xu,KOe˗om[ic䑺E٬bU䴳FF `#VG#$:E JQ*d*y2t`7Yseԇ}~mg%ߔsMѦB0^F[J%jO4A\h5IZS+܊b_v-ih݅m` ފze`kɎlEz& "(ۙL.ۙL-~-EFrU&WH.'rWUmWJFz*CHCM@팸R+*SURȭ7)GOqgl棿sȷߌ5ɺ]\Vyθ2@xiZ M i4-Ҵ@H-WbHRīvHiZ M i4-Q 2@(GҴ@HiZ M i4-7*%`X0 F`,#`X,[!#ؗ#`X0 F`QR OӺ'DM2×o1+L6⒘|+EYHxQC8!@I,(RieL1!sQ- WM¤S J \R 뙱1$!&q'ܢă:URy(KkCz]e$}SW=GEe壨/b ; |g3C;P"R+a3M%GspxǴZ*vR'C*; )Yh+*Dž trRqjm{A'߸M&q`o>-gyZ4~9[x~/9vxytbE_|9=_4&}dz#`4p~1MJqFIgkt!{dӿ|UCZE&c6&NGj> W뽎U0)0$7yZg$:ߴkLC'sK&Qz!"g?M(AQ[ C&ʶ7JoAOs4t7t:z<1I gn_޷71obuLtrmˉ=ʖ:lϰff"0⡮m*.EQ%Z1'j) Q !6a^Xt$yLqA ₨+d h7 x>x</?<0@XmLRH% ɒ6#Lh8L  ͜HB,BoWOUGJ6c$ nzjs=+R.mp=uT6xIOr`Uu |^F@'b\uV&GiZԟ<;ֲ֥-*;*;Ϭ?AjB*DtA[܃q[HIFB( kPUdT-lO OSIolV|BZ(&#KA6 CUU?9qoXEme!Л3%c# Mm\}wy咾zyi>_:oN0)Z-SpQIIb΄N-1 d!eR" +D:K 6$0𩡵,gՙ(a .()29w NNB2"4j2&=WdĔf'E& y8NAe;_[A} Xab Gs&d=:<{$t8nluF@EYU}6Ƕ"|tRhLKD vhuREQ]A.hue-E1_47]̴3B{}Vnpw 5 v=U>GU[_B50a8xEMj"TqOL3U:ϩS%D kw.`rɕ0:z&|ZD8"s&S>&4 3F+B,z$fQ{E%gm)}=dեۂM:bv꽌'ARIv}$jgjgragjgjJnJM7X^]*œY+x̛c] d*0C{#+5%rъ$|uM1CcEi<(CS=˙Kxi @}Hƅ43<%Bs5cAg56M *+lhRh<Л;R1>t +l[Xo7oR{!KMAB8WoȵDWZTz"NU"-{\l"rrU"r{ U"6:Tִ7= sEDeBHT8\DuOʠJpQҴc x2!ȂLuvyRiXFzH>J9D"R3|P ,Fth08N=2ܜO{hIܹ?kF% $.4{'0.G?㩘"Ns~ {G?\w54>$ln"yV?4gcFc ;eENaޡ{㉛K'{qbꯖ$Z++y|֣,v fr5ika63?hÃt`|v~ȎCFsnpf[k$ѻw˓GB$%`2۞]&ш'_Nq8-$..TfpytiLݣ^÷ OG֕ᚱsbN4MƇG2s1k\j_NM8ч#;=I~ד=qqsO'}nL{yc72bY#p,|6&s|2^nѤWJ6d_}c5h^ю+iʍaWeO>hֺ1lԴU z&~kO#?rۯ?:?Oӯ?>~|px>x4 ~9#Q+ PJM,6ENznJU N-/ (%94#% &&qo|6ݦ(In rU[,~F=D1M=tud^ Ix  :uٞ(DZdn{1 2^2M[n鷟@UFZqFBeךZ V! 
E\")rsE7uy wc-Ă l|9 ξ y]}{7 Ezj'CW?9C5rEIW'MkY)(^#%fH C FAKJP8r!*\\/p}o?Z:|neD3Oſ itPG]n]cA); jHxɩ_>=ǎ@\P߭/{zZZ"+Wq֤jt_4=S5h_ `O;KK:#Γs]`.l+!!TYᩳÓew@s:QAN%IaQ6I$E(JIMʞ ?vk|TXr0>;~<9auO~>LNڕ1[c<^KhKٻjw1!p/karIuAI +^yFgpF OQ[rE)5%G`(SBHIUR[l#i_ct/{wpZDwFv6w^G6 !&;>nj 7h B!C(E`R$eL^bdmPdm taգxr2T3MPZ!!f*Kdt=@g\ޤ0{Ҝ8\ <~%(fO0˾-ث[^I<3U垮|&˨=K4G[AGj}պͳMRjM'S] ;Qg0Q={SOUB=umd!1(G(Ѐ5sT*)]r;d"3D mA*)*$j1ZSD;b!X'B囉zo: mҶ=.Ҏw#g竣Txl]0~-jeݾ癨TNJIpaҾtʝZhe:D#> NhIBS!L.rʨ38+[q%AM`*{؄2BwWxITVtꨫo}6?d}CV֗NWGO1bZQ|[&PV"q0L *(av?bw'M+/ !_|2~y<Gk-0q)?>LZWYä4,cɌP,Y2uxͼ"71}3u/B]B14t*[vrr>ojvD.xc.foGW;y?>И}EL;3 /˻w&>/T{Sn{e;<w >Z2.wwuWn;z%|t6V(k:!@j\ bYjM 2ӟp88>y|%ؚ`kbfP=QĤѓх"&e. Q*A/EiYdLJayQ9Ьrהq$8{t[G#9;#ak ^$B4'H*Y3yaL!){DxDsy:{`AF-9s^P6gBhRfD-$S #L 4E\D \e`ҬJM. 1d! FҵB#fA/[ N/FjU=-{-?jbn7%:XПӋnVdG슑r0[Lj6:k%&&`Ʉ"}(_*563zBOqpr^V8L{y2}O`xy?Ɉ2~{<__ `^ѯVBi@CȲ2~,ŘY]l 㗆 RY:>; xC1k$VEGB\rƍ*Oy, zd~R򾃹!hcnVfJ\ǔhTo',r:.ީ[R#cINx͸}%^Q"zW~?|aO) ׫oJ?W;ћgP,>dz2#sFb2y1&$F2t^ 쟊N']t|bqIq|Y$Z^61%a+SrIS()=r9RuIE86RQ.~o>LLTG?ʅi?{Sf2F{?zq@zc0k`"]# M?փ*鿗`TЀfU?Lk\Nw @u,OI ,JX2D`J$8( 0~Yd6)\:"+oғ4H`ίƫt:Oa쨩NM>חArh;_^JOCO=Odx^U}">.z -j \ͣCba>fNT {mF 寧A#&f\*kϾY3Qca',ji=gM@ap 5;hh ?xɎy5j R%%SD{hDID= ŤH oˑϑ%o<yMƍsV1dQ1E+=YkF0P4u)ljX=-ʅ{ϫ9ˎ\e疢,'4ƈmVЗ$l OڂQs twkY˴8[f+XA:JJ\0jD#P( P *=WڸHE&":t a(<JBmOh8. `XK,@,*#d,( _$xX7~Ayo*#O E>In*p! 
N`H#B@\KdkI< =Qp0I~AkO:v4p13>9PR=sQxO.]INJr[(ڜ-rPcnL0&UED~ Cb5*KH Ԃ JjRz\TknlWŸ*+;(e]A;_VöIIJ%>4T6.V54؜;Ι(W:S љ(:q:zJ;zcJjHc0һH WpaA"҅ulI`Ro$*:o;cƌL"^ˈiQhj@o5pU{WLF7AoT.9?VyG{}7UQelFsa9fcv)ЃsK7Or-hUv]>4I줞,$㭏kRbvQ_+ws)kn{Fun#̴yz| kkZ7x}Vꆁ Wb.lVJ`^=c6ư}YW"C5N7#z h~lx.МyV8MHes^)#28/C;& נQN#"jJ%ǴslϭsM;o1ykY\áUkZfxPk5`K T@n #VBhq+kc7TxG(fEwZ$p0@AP(@2AZY}-=V^pPnSW5ĺUV Y0SX)j=d=, 1bʝA)BS`Jt(N;A#8PT":|I4c*0C"JSaPGH}2"G QG](t2/TkB%ҞeXF+ǼwgD oaW?d;~4PtLA.u+}v]$Z\I6=B Pd籶X>NEi0NōFi)m./S E}Vy—@Nc@HFdI!:Hx1uf<Ws<:}mͿJ``VtZ!*T%.7!=w^>2e(&e:K,Ow私g&+聾CɆd$.>o>㕜dr5<j*:x=v[)Z% (|00>[XS;'|,ToWeËel^8(se>]\V{Klo $jדy9ꪙX9wtjr-i-ŠQiI>(7Fgsz5 6\5몵jЃtq2F|>?Fe21%K >٠/1TfP8[5X3^:ןޞp|?=Dpv`\r\;f'篘55LMfj b^N\yȇa=[ol- ן_eo\Ef5ɑ[t2;5A+%+gJV/(t!J~ zʗ,D q?A$΋:ZiQ*= ? ~z34j|Z@>#FH rdH츔HN '0f  KF=((AZP#q#\F䢉.N޵N6I 阪v:UpȾx߻]nw٢̋bU8;D :gX;H<<`, { n0 O vW3'uy~*2"9\ F7,THa@* \%ݠamlwR tQB&h, jIg!oitve/vd@QǮ" k%]q p®Zž+Rsұ'Ȯ =.g®;JP*ܱ'Ȯ$|d3Su|d?}:F-YJ,<z!|3,4U_\̮ɸ뿽~|G;)Maǃ~h]dT[Ƣc\e)C6Pm<y,![nu '1beP;`%s3"%CbaN~°JJaW*:v%{ojLPJu+M2IjlhFㄖX|]>9@UeϗKҟSkR 3mn7Ab54ߛ`cx)h0dYSes Ϗ Q fT(eRK|V``)*h䩰!NP3AwVsfSw:g#3i0͸)NAF)ԲD4y |yR /Vпm0}w,s<yP=\o;gFjW|WીC(f`kΧ"p O=cF@-b>MLj9+8ҰTk%i]v -cwP+LZ%/`$~3 Rs޶zc[]Jj|0QɄpNsuJlAȉR4g$l,QV8d]亹`{%eD;9yeD,U~Q[ň)je &<5@k5Jq(rneZGEdQ0AtbrX+/B"IY$Zgs\ho3%]x>nQ1hM1KDW/Ѷ&4m>;D)>/$e< ߨQ* 1g &A%${/e*[؉:]ծꞖ_X 'q GB!(QB3) .1tf*"]'"8PuJymMaqCYKW=TnG'b>`'3+&hӐ%(iW-_>FDmi[oOT7+Ȳ||}[B@TVq,㠕G9mLp4:ϙ6Gל12&Jqˆ^Fc ,DDꥦҨH0 t˶2_N7塴Ou?s@C[Mo:T@M)TNQύ^4/;Ux>e\]C cMY2rK@E0HJ,T#F0 GxpF1tqK$kd!P:^V20,J)#ZmT{*cE*L< uQc%0&IY,-,xGI4 Aklf wsݣAkr(>AwەV&=}5DКYTizOy6gڛ۸WaHOUNrvƕwo.qh\KG9~"%!qd1$eI F?p hc0QhNƓx!`5p4y:ѳVtK<)x KOBzYS% p%>;/vSຏgeݟ^7D qȒϯ^b9g֢}dnWYkhD+Ĭ&:orocUhgwmu'PԶf e@dKx !^xft|b|#Ojel`0 B{MxNY.?.e_kbOt@!g%(ϛhy9E@K0HKS&,hSI 7+4jƁV-hs:a9JJher儘 P(wY!KrU182)1-l'+!Y9E%!g{4)\HfoB aF M4{JQ*]O%W䧷'_rɠ$B^xooDqixɔ`YaXfy.0u@O0g.2Xq:8=||H/).+tu2=͵iqv8l`1ZBEʲy"o63|S;S^{'=Q)x8  |zw~p\k7%::_] ˕([FcC>G>%[t->7 _opSƗ Id4@ L\v@L\jwj6os=|\̤*U6LЬpI1K>n]_nhi׌"ڰ\tZff379 oxFvaF:WF 0N J2h$rQuV jX@<|x5P:V`%JC6' 
))t,DVTVJY#t'Zze|Zq2SUjnmmΦ`GuB׻PI". Mɜ%11(-)/RxYLș*0" /bXB[#mY 5Pfi&xN%2FΎa7!H))ZY&ikU$@8Ri4% Q')4l4 \C %tDҿKd*G b.|V@E^Uxx, dHYTڐh`+u̐ ڠJLBHN$NPwϸGܒ2P2LG贤:xhWKV-wXQܩ 9GӤD71S+kyP4o{+4kk}T?6w^7O'g^,[`A̹?g܂% -I,& ψW׍ڑ}uÈ8X#& kE3Xf|Wc ,[;`G?Q=#=.F>y h|7ՙzc_W,N0:մ&t4{_7}o7_{w\w/ݛi cq\GIq/$ۃC׋0jkho>4XЮͧM*h׌{}$wW[+[kx~A?n"ɑSrrkV/AWlbZ[.'qEMwש]-xjb C!{Uب#;Iad׮'I 9#|&6p^1  =`\vz68E^UsϧC4:xF )Fj gZ,fcEV*Ns"b.ݹN+:}5qgmd_>;_FHC/"WaU8;+g,y_P)}JIn#xRʑd h}T*4EuO3܎3^<{?p|bLFaJ>@IxtR1 ¡.0vk>X#']H_Ҋz&=k|q2qEv_sk[n9xءGBG%Q:+ٗ D2BDp_ z/<>{-&Ϟ"/4Tx>]C00҂CJ8^rpq~x^Ԓ:sGfW'5eQVF$ k>r85zr`'Og`.ԹPBh_"mi}'ҽ8tto(tI^LӹݻJ1j yfNzKՆ^g`>E.t47xDUM^&ף׍W[G=Zt b4_Ng?ǫ/G̭!Ƿ^~}Ƞet9PҜ J.rˑf p-hv(CB&FH PK U]*![LBK ^dR=څUF&dg*ueͽɖ̬&G,Gpt29nc!!cF p$]8Qⱸ&.o_ <"%NIcmEvFQ}JGbk^"g{p`RydV[])b5.GmZ*p-[ZK[Yi)$z0hI<&Mʟ ='_V&c )r.h)m99ZJl%MCQ| B mA"A!Y̺,CtO,۲n; iI<긯v-KmInLbtJJN(E%iO<1J{c՘AvHf*.IC&d%iА!=1jFMKC"2Y$-.k~~YH=%dJYD$>ӮOm@9&3* S6T4$T"][oG+DGꮾ C6XٜyH&+H| OII$uI51bChzꪯ1-PS;Xd8kTEY )D#*ٓhY(C3q:Te}uvJnkc]_rZ lk;P$|$}}vV'zxvq_agPvl=ۈ=c/ζ$>~lfN7Hۑ~)!{h}c$`hsu&hRa}=;=oߣV~/0 B1R_(ER)aJ+A+x0KQ<BJ".{S TeB ZŜĹ_.S-OiΙc^yڑu25 kVLiozu:>^VKt4W|\sJ "2؜MN]YBSd;"5jlstI`.b#iGnY/69<٨QpzW4-9D`Kǔvsn|g-^ 7ތGUw^1t<[ê=rJo2vsEtx~y:9~f ^r1^ֹcjlz~ =CF ;|\MAxVbnx8OWyۯ 鷿lUwfeKEtHWYɲ?qɧW"S\p"])R8U#?&yӾE(_nPw3$a߷3{<=ux{?nP&8lW/+5A,ԍ.0 cgrMmJw:HI 7sHq tI65x:&H}Q; {(QUGqǟg ИLф(P ` HP z_rNE`E4!(lmm|HYCL J9̈́ ¡29M9I:0ԕ8`Z>O3onKb$@K RSkWgkS&$XX+rsY˦(geCx fEŰ2$^H&9Vv===ۏ6z%=H=Oa^,x^G ꀆab}sޘdM, FᛠoԕD]?el\XO,!=hw9*SE TB$Y0eRE%V;J tXq}:f7u;CVq舟aA=F fY~_Jm<;| 걍].j68םtHԄћF:$@A6ˍuwuZn36I%8屔Zb>xG2sRduhuۑAkJiD:b/`1:J!w"C Uݙ8#g>e2@z=+6'6ژܻ$FO*1idtkKATJ_h)S.^ԏq3wqWLk4 )j 5KpnH>ޭQ݆j;ھY"Y<`ˠF6g-te+FjUl)#Iؼ00#H&C) Tɬy'ש7=MOoy4˓ɒl^%s%hRPZfP>Īm*|RDQQ)S:a]G r{tp|dzX& .Yo!Uv&h11s1B*Y@w{ɱ7J*:ؠ] v u#PԵgwv !;K].߂h|qb J6P'G-M0lР]e/_/ gff|˃:U 2WsrQ*BJAǟ?o) Our?_oY# j:蠞"ŀP|J/,cRᗤYk2+;ѕTjR 9(tҺRZ&_OYܠdR|]B,ws]Fa-Z/;1%a+SrIS()jTP0 T ^+dS5zxuvNz1O]4e}y'/=O/%F^2sܾͭ\xI|_A_^FPGb?s#ξeń&}FkCAڨb%%PBigRًA9ɣhtZ;+Mo:1zu֞<;;4X ekW.7 ܅'C\jv> R6OꙍH}OY#,EQhJ"("`BC U 
[Binary data: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz — not recoverable as text.]
e![PKFP)2$2JY@iV<` }%iւkEլ;$R .N=%Gpzq`GŎXYgLj҇#ZisW+9_`|B x41&[3*,!B7\t.Ydr"(R,B=vt޵$B;C@v&3IF?meQGd~$%K(2e+ g!bwW|fU _Iw ТQv۟^eaɢre蝹Wovw4#(KTؤ5>$)*aq>oR f?lE§> l  CpeO r]76M}57ovh):۠ŏa&8fYmWpcrN/Ԓ6mJ; 4@!hRw Sݙ׮/T[5CwLk3!x99S c@5dCN)DAP< .Hn 1mm*1c,œ[1L J&rF2#yyv}]Dҫk4ϭFh|nfӫ^%cD9=[*Ŝ,QhXPKÿEص#AHTi#(bF1ZleF͝QFG Jb0XJBSE99x#x\pww;=:`(0$r6 3G$c*]u}GU mi*rؚj08VQ I5T G@N!-8IAH(( UxVN<XϏlمl;y*+hC|ɗ 2Vᭀ Api`(72jĹV( 3n$€0N&o[kk`kZ^gֻ09\jBݝLJFX֕r'57hIr'cșU.gLD%^Xӷ}K>}@Z }q`S k오( (pK#a0̇Qt=$,p) )0pZ]g k "z4mm=44DIBa+G&[~}{wJW߆4u{e+9'K6}цWh4t,;g%%R, .: qOaX|;XqI阥EfT|'*tLtLs>F惡H!7(ZE\h +8v?: Kn9y<*֒H=gȩ~ܚPfb sc#?&In`SB/6r^rÜ9Q [cI"!t[@ETgS'O|FTĥ*>ZE519cje cੑ<`'qHr (ʣȹMI/tdXDӉ#a)$ɬKS<=ДI[?61\]iiK6 WNt>GLf6oy&eu㇓Q0H7Fx@:La\!rI8OQ4ta] z,StlI!̍i`(U3@AסVP H|EB# mI?8S[Ő}␞  ]mUAgT0BQ9G7m$u *EB c[MeXEkj~`S2|pڰL7sܓ#H)QQZűV2VQ1D3ln;(-S_!\/+!VEfK;w뛙5:/bd.}){L^4yW>+56bOIJ+=p8I-|y7TgZi:Z]vBQ͆}9ޗRHI!: `_AOf8r|Gǥv8*Rq0()e- K>+r/(l_ CrB Pk)h)%!ġu V bX4ýH#ln,.O˪{_pDf6<3J٫?f>t,%빴ֺ Vc㬆~2qHwgyy# ub3In1N~Ɗb%U|H#K@O({#m֞"h>HE&HʈXFL;1!B=a K0*jEA+EI*Sem0RS-y!I &E N&x[֬[#gf];}bwZ{/$c0;gKN% c(,RwAsK]1j݇ Z-HE-4%TciaE*U*;Ƥ1D^P'<# mB<8. S>V8eɀ"\B6%ao"XN:-*SEXY \H*S I0V e&X'At, dyBH';?j r  O=cF@阬z NVF߈iI&v}^\0m6𥔷!iUfSh6[|(=βliRWsvx=,n #M6+a66M*E-'_Js"= 2&:w}~S2|~;0 XKZ*;JMV{3A= i'7>?Gm*j$9z3EQt9Nd3QtmV: G؞Q6g5)fdTr_>T NK'tu6.AHY'*g8-B"P-c!B磨 dbgxnavw>OAC9ٻ!$7} WfZ'"O*O?}*$99%YGrKʝFk.%>d&~=_ifӕN+}ڃˏm-#ªCsdG4ìӨ7ԡΦ<}=_4L2!Bm杧Krsw"kϮqHw*Z.Kq bضyպż7c]6l{^j}=bK~5yӼ;^=j[?g髷0.M=l7BA\C>a]!'(YKd[tg?6yZ]k"?`y3:9p{, RDֹ8UNKʗƶR:2&-^y.tuͦ@O=ЭIL˕OtUL(ՓWW2V_Đ brN[ř O_]w? ؒUi]s8r%z\~pe]i}WuyЖc;띎5v+=yoRܨTL:DFjC F)mIHZ|6G1+ '^@A)j\ ̾t?*x)Rs56eOrVώ4#=d'z6YsHlMӅ 6NbjI&&g.\SP 8[43%ˇCavXK ]ZiLVk?~ g'1kg/˥Wk. 
W>R_9a@f`&9wЭٷמ2ybů$.d5J/[}ނ_VՊ‹ó|wɋϿkI]iĊ!RJ-,ɕ>=Y/2_xg׀aoQ8ٻ˱5rl7H`jfwzp{PKb[bf`3`3 ~Wp營.[V/:9P_ n/  翎Osg6+/h&˕/x C-l˗'yvǟW?~ǃ͏/u8A0UX}/WweE]cz5whW̘]h a ]$ 20;[UsqGr5Ϻt=/r+ 7?Z;;G|vQ pi,(CCg<<6ylylY\VfC<˗K zWϧKc1ɲ MM1i1iM}eAS0Bo?m~J)F=QGBUTk+U97oKЩ|z>m8m[wFM`Iijݙ:ZO~_G'x%w:\kw:򙮞$]IRD;DW0WUGԶex'HWV~ Ź+u3`GPzgztFrSb'gq^eF3.wa x gM^"-3g{/%Nʬ(B[&{ [=8zx@Q+o'!'S6]*{Q[Nc/cjsn#ӁOWd'6뽦KS%bŸ/?nrz||k$&V#S)L{6("/6\-!! ά-(!?k1rf iW誣v(7w=Փ`IxVp:`;tᆝ+j骣X{]+{ǩ{W={f8tu?e teS/^띢+y+nwJq=Qgz tw.[rI2n=&V`čPƶ B**$³~-EmL/ب'>:,>yf&X7FUe{e.s<4fe2GeE&G.l\~j-Uޔ(Tt"[Rd,e+H6S4_G~؇3sN?>rOÙ~)+pS)D18o,'$A+"ƙ8lߗqbbը# !lEtS*:W7947P*RL7cs)jS\ 9ccS\"Z2rg;&ZWkS`L*F7JpL+[ ɺ ھ7Մ`>̡l5\65QDqM* %%qJR-$Lcw +pfLcIccR֭E9C)Fe]q+ bcܖT@8 I+EC-hC3R5P:jJa0 .YShxXZ\dgg+Yau^Qp A+ 2QTQ<\vYyc&Ÿ~,% V!E>%V3Pb6ܝ7!PLNs͉xNU`U3f-HJQ+W@m)10э@ۉk}##ko"&ed} v# "KfԸMƳ*.zC֪)[Aof\ a 3,HX*H #KFCX{$].%QC`ʤUt BѸ iX/l).!(<%nECE EG|UZSalÏ5uXqxqG<تdKJ4A2'C1c0mVS&؜~ܨl UPhs]V0 ^6gFV Ny4ZDRZJ TΰWw@AqtW&uC` ݘ`wupOI1TbbXfی4c{Zw21qе`xHoU%|z&* ל 0ȸB#M!AH iLBB(2%9ihLq̒ʔM')iƒ Q Q>TKsv`2Wv9I0΄="D5 VQ rLhd_ J-.>#Mp&Q !t7GPFxԑe@V 2!|!RAPSNԋ,+S~X5=( E>\l5)-!x^>[u2$$ƗG !4PA e`#dѽ*A=ow aXs6#kE~|:oma!<['38Vhף`W'3SK;!pB`ߛI^ c0}-tXH d髒ћAhKрrYe ݆)Hy|"&a|`B'؅!)#DL+ȫ3 3Za0=҅a0G@_H^|$/)nM!q"pt? ,:7PZU`;Ql+ɒ3#/a$ S``u0MvnLWu_GUEq>e-`I'x__b=^#EXF$S>q9H"P)Eޅ\B [2MH Hޙ j0i(! 
6#2ZN&Gv,0 <:A l@ ܯjKPC`6dT(@0cdMђk9~5#$e#K0q؍ha:LL*PlCV@;M7E2DME1,j !"ȡR+YooG .!)Hm-Ѧa`78|6֏˓<Ǵ6\3(hb`0 Mh- 4zT&`6R`7- >;a^KŰƬ#HY-:h4v HF1ó!MʌТ|_AD< _mFȇ2tyDࡵ؛\D;3S"Xn2"TA3R`JHH:(- = >!Jݺoܨדall 'V bEC(yrՐp&r ?"wV^0 b {Zc4* 򨞌Y8 G=guc[`pO6Kt.y ٻ6nlWvۑ~um4 M`5Yr-)[=%3I'h*b) $pVVJVRr4ݫ9EkvK]l׶"%ܗYUZ2cK~}=Ki],RP#b]9JoY-|TX6?LDI~__\LfͣZnxGޑƎ|HVڭ߾{4:eeIȂ}kS TlCWWBeg :tlz=Ǐ, 7pW>OWʗj7UmJ#]ݵ)5]e2\BWVu(@CbDQ#zDW1p ]ew2J8G2`zCW}t2%Z7t2h:]e!]!]edH ՛v@+M׮fjJQ&t5ս f*( #]iLf]z1T1Y Q.G&Fu75*F-iXb$VKQcoޝ`ҦOQ8Ho ג/fre;&E%1Y"%hjF)nI{ Y3\֛Vt~'izi2l%p ]eun(-ulfuمSv,#npvCXBPE ]ݵ)3FeT-+@ N 㔫>yWXqUF;]K+s^;]Ȉsڭ nSkOl!/`VWRhlY"+Bɠ#SV;EWz8eE&&yL;A IF-B գ9elZۗ9%U%9>))sj{CW.3}Vt~>THWHWVۧxh ?Ѧc͹#J`ЕݲHn{W\tZH;tL)mw+tu7y]nv&sw1f 7kpev88=\N>uYN[ͥ~zv½kUF\)(ҕQ'MM p^nh.o\Xl//㻣EO\lX/$QCɝHL_G,5ܟMV\O}1.ShU]Ƽ?^]^Vw^Lb͛&a|ڔ=Cn4=ŇG$,J(M 0VGF{W vhm rrtxuE\򢪌M QX (pNFh@π^V6wykMnʘ>NyA H "_cRAxZSI䀄0V89IE |ԖVђ(id {]6ŋ  WW!K#ggN&CukaWrIr'QP"kxnq8 }bȼep :F|r4y[q*1ȚWC+m(bBk4zDr z}vܮ6(0wiLURgY1j چȣcjp⃧kMYo?eK)KVQ?ViD`@`p xnيTh0 9 n9NEL!*mp6FKF0qwY/4G;yJ›Tp祉& K4(LbX\7d= YbWϛV+ 9χ,10lRKb^LUH$+76P swGyaZi\jXީ2D-4!FiOJPKGN i9atG HlF\[kINeU:yRM!=5x 2 'PlEihKyA9B}C lup.o~~y/N>{yB99㓗?=QoP``l!z/qYˇ[f͍&k^TJEސ<ηjl;o_柞i-NݴaX0+f+8q n~ g$jT?^G]@85!g)xnsΑz;͓W'q$xe8{sJSpL VU;M<?i$~veRTOp786nuc5Q]$!wT,{B4'4M۩auc|ڗj'ZKP&`%K:umAxWA4KbJ&^QE=BwUCwv;g8|6Jg#m}l,ʖ![FkmH{ZH|zxKeZ|NpTVCZv3/]FntZ@o۬q>%PXbV'-/xk"&DVMĜ7~r}޸NSX0b>Ib4<ȏ0"T; 6Tr͂~S7mz~<8d-+|nq% _oK~tٕЦÍw`u4^=^ Ʌ,b OAjYcÀ>~Ho? 
y*^uVʾZZn[<آH'z(Okq~UOAvܩd>RpRdIy*J;uv7W %}G,w>e"')*+I$U8 BBgj{_cY| >pA{ KqƱGHL+$[r$Y#S< 2ȨV5ybnJD)'S)aT*օDF,Q 2H'\trDP1DYDP%C(1xikfNlgS`KrR1K_q_U`s*2ŘdMd+m}!=#d<\vMB~Nx>N_FQaoyruΓN67̦giZNPu=]3Yju1~zsn2jwlCJ9$!e!2_v>Ɋϱ6Ɍ~-e^nrtO?[}jC3Mw_Dխ'󽐙J`F+zR9sK8_Z{P]xq{f6:?| 沵6|kgiLn~z.spuv8'oЬ_;޲9>xmWgjdw] ng㋿[[C֞^q];MMuł*TYm|RW:nqknz;dzGnz0qwo7u?7_+-wd~˫y1π_2=Ͽȍu;DQ} wCs˦&l-=mUzmU=8΅[CaH EXݕ^`C87 ZEi]0CF;#;DK BHY HYijܩIzHr [g -uPt03!lkSÜHYz2z_+H4%HtpbayQ*UuL`+?mI^j])٫vo:~wDm]tj{P*G8lkR%~iJ|>uZn`lЄX:"E&@)U{VRГDݓ:sDB`J;2@iQ6Y'5PJz2@QQ );Z6g\% h;&-w=[-;s'Cis7VsN;z!vĊRr􏏱E Q1vNZx .*:B $<,9b6TЈ l2$}M}c]"BC4<^m^iy0)jZ2aB֖ %)JfN[r%`su'Zy+4EÖj O`Yck*g)x*!ωu3f]56ْtxPy'<{cEšӃcep%{YJSnBvr wC{!KbJVT&`ElKpHT$u9#v<n6:Em5ɛ,;oBUAFLA!$$;u*J664 +df"X:1I%đ1?1"&Bcd,;omMf9ycf: ;j P lk9hwJR3C1",m06 SA] yath ó_tkBK/#{5ypK {ѹ?`Gxf̽P`"m>ivW_{ "&쮧W-jH1vER- %r.9'&/14Sjn??p<{k4|~~t3'px;y>h$/xyٓG(-/ ê96V6ჲ÷itj\#.'{]?3g{>߷xy^TX8@ٟڟUθ=_² vqFMP+l)29(VgK0}ya2.&(zjֺC@)$p7ZgsBYؑO}M3D@cɞQH h]"3JIk Kť`1XcYs_׹NWO #GvZ7QQ)w2kIsD;>K\HQxvb<E!g `J0.yhj51³vBr]XoRy Dy ! `:ةJt43qȳ?N&ebݵW_XܑBCW 򥔥Z)E>3S*/D4S5vA›Sckȟ-yS+w>}8aHI:y]fZ5Eo-2uؗ9wқs滅`7Pbc#_䡋Zn{y缓elty:avv?oW~/`WG-x6߷oFaom&U8~ģqt;ޙZOc\/oug=ڟ2~݉L~š1QtI!@"eK,Kd"7ǥܧwm'GA=;E+V. ɇ@I5`t גJB*;bTbq_y}y@\:Ʀbۻ ?>cؐ+v:% ^$B2H*Q^AрGA:x^d TA(K\LmV[BB20"IєX@oqq@9^O3fep&6߭n6sXW<ԯx"WԁR<&Ym`xabF8H)X x&y3 ׆<9YMJKcަbgBB3 y*]鲎wp0r~SkD"kmH.NZꮾ:9$9! 
ڌiq~D]HQJcòTUU]]gWEkh1s\HCշfo*`6CCw u/Pvd7 w }-6x5I7/A|gƥDsnR!@(Q M>J@zW."h pw㻸}1nbLUܫ}#[,9"d)+s{3ر-oOFi>l~|oGdGӷG4naAM^|_j/k'߿IktRT`-6Wl49ӫ8͆kQ‘p*Ibܚt슠#<"r_?g3_g~"b~:\h>R ln|_rN1&02x\hH&gpi5fl8 ~_Z,dƂOhpV|{",2--Evbk^=t[w*V'>HKRU 尭s0PHȽ`rΪ͙'_jOԻp]2ct k%C > } Bnt={[[ka>oaAZ5 +RˀCLgٗrި рmuG*BJ壤[%^  .X4Gଵ F.K|swRÎ=ci/k;@ӝh `}Z/:IqBjyL~Nr%;F!qfF(1܆>|]M`t\ ϕ/L":\ȸAfjFt2 C+0VTe_U֬ύ> ~ /?%"v| F"$ R eԡZVi:e'| pVq<wU5JzwݕҊ~@8wUx(JuwUTػ讴6BrW$`U]UieUe^2K޼OR~DFM*y>IG4P = `:ȿLg8?($|{ãFl\_]Zv\$IRArl+~jutMC\] 2۸TgEzW.1VD %O,Z+NW-#q]εk~σKK?̚ZI2/^FC.|18.z*cĘtb;0pCU=BZ;p?vR> NC2`1Fr}Y1^ ưF;`ULHDJ.zwbܕ3h!+Iu0>Vռ깸+w{qcUO'O)|`>v]SJݭ1wwWs{w׫wm}كl0yUK:nyb6MM]Onv?CecQp ʳT m G~|!X^=aZ+Ƭٷ~ 盯z}g״.Oj ~-@^C`y稪cIf:] nev^?_Ѥ1mܾxa^l}rDcū/yH5i(0|a[,ԛ60.e!n2b/>vZ$VJfP:\R$o:*dFs\%V< jtDTT[A͐CW)ˋrDuApiŃU&{٢ʐcPPX@ %xoO^{ owNl$S{;=zR%1Y-FF]L t`J zyAF9T;i9tPd6V-Hk@^_XMAȒ M5G A 1+$6-6@ f( *Tꃏ`آN {>ڏmRs֊9C RC)ԃo巣]~e8 &@,(U%3}q`S]A0iQX>RtI/f9KpYI.AWja)+MмBЍ7V pg]_jM'VtuGm6>z(A5[ɘ*,;Tp%y`rLJ6*KHP6F(" .&֖$,K2t)EA+霭MmԸgP@U9<z )'tG+mgWwpk9lE[;={*0Ԣ #qҢJ-ꪔk=u/EXwbp4;1|p7z J- f zAT\H%X鰔ZI bR.8ϾdFe݅,!Qҭ/JSh(`)(xZ?aᲈswR՞iٜ4sv=z[_=~\aS xwoWO}Ds-&.c$Ou5+ -1ufF(1܆>tuMJx|`jA%2nڡњbK:mc[ALvTT6 f>7'/ ֋5ࣇO1XFܥUC.{B2ؼw)[@n66-&X KeIZ19%2k}12A_(&Jk^xnǽyKSfGwOGq6Wo=Pros3nzzfTbLQAV DIAFt\'tk$m=£xK5C1H\K$`CThdDA ͣt  cJhr.m=S `()JQKLU h@_8^Uz 8h:q: rAQNg tK}FUuh!%!& VJ)oWl모"QЪ51bij7n0 Bbor1V_P&u5818ûtŦɾW]O X#\(8ߤ u@ZFp5}k&kk~Ŗ}CP|W)X.7Au awjrc]u)jHhvV  dy o AAaV^r9` WJtg?HVyV="nʖԗo fat V K6Rs+jӍwTtEfO\{v[Us읫rO%t}T{w` 4"$cH>mrZUR?=ݧN؟@wn֞U.U "0\Ǡ&E N&xN|9wCZL>N8/Cr(;p-+e[nI7wwSѪCd z aM6H.*sxa G %m^e ^{ILIxk`-( eT) k zqyֳF΁nCZ-HE-4 &=WڸHE&!:I op(<[څPH-* `XE0  @/jciw qQyo#.jdd, *Z9ŐElS;c \KlkI> $cG$c =ޕF m( :m>v LGWfT]9X_2NE+0IQN)93Z`|<aL~Rٓi>r(lJa>U逰ZrVk/;^sXeOyɣv@2~z/o,w,'DHKP/u@.8XUH`s8g\sL6Fg8Sjc8:rt|OZޟ랩Nfw{wtVwx\lcw'KnzlzC;NgD%֖g&5w@w1Yl}[[T;k0{glrռsn77Z^Wimy~~O{kKvm|aZ_z|dQ9[f;^ z Q%;e B6aN;mO&ŏ־6AP.5fu3sYr(#X<] 58vg:||kc{xnRv#G<;~~ϱ~AV!po${PFbȻ~C^+k)'GA}0A*1j tjgwkT:d`e=0Od(\0Fyz Ov$ Ϋund>~>V\*ϟ/Mja*RoϯB0;a.\ȥw_}0j 
Gm Fk;,@.*@Yj"ge=φ5<ƒ ƒ -$IB. mn'y5-ka<2Tl5am:J})nE٬*,lÊiۯ2~wzo,nOZRؾ-aX[cZ[+4e>.Vmp6 WM t%u]̷_́ȸ\čw U5+_~@7mc&% &$fh|Z#$/!;.&ҀIbQeJ1;IzáUyy%=Ñg N-8QD.#rD.TL]&1#-H\s^O3{uk{6ӝO#$zЫH۰tInRDMþ@ 惜+[圏 X5虜1sɭ`)$; ^-vtt#?4sD)5BybBq#;OSJt)cD-skDc>quo?v~G$Xd?ܿqv. =:” hh v>1rFA Rbr ӂ#@C d:`YNh,ɓ8)!@5V`~yS!NOZAQ4+|#WӔ7ޯgw zEIvZYZqMjSMR%(jJ'-SLccf#'Eܒ 5)sFjΪane*L4:o]/DЬf!vF0x)sR9ᷣy%}# ҇$|KqY:Resa]Ѯ$34S)g+zލ9'ڣUڋ^S0 9h!q'nEM Br!cec=E/mJzon}dglUg w޴d`j6*&qࡈ±)Tjm:w犔 To\&HE"r V%:8޹eÎZNf evE>dt['[fY?Gf~ޫzvEG<|=吼eSA"q%}XtEFI[Q 8W (S'6.W1V d VC HFnqIP&1DNGV#.m;!n;b) n~|Gzu~\a٥jxqoR~0g=&@+7hvJ|;w\ۏ|ǭ؎5Xm TѷSЮbV˔sJ$5bK ܶG(3< p@Fb17XoPlt=@2 Oe2[\Me|tZ;;^Z;f/04q+of ̲vgČ||w:ŰTm$[J(#DiY"Kշ*n7H8xfP#Z^q#Ѩ{;~&ݫ![ҟ&BY]g *&0b6hMNz(m"r7jgc_ٜ_}6x}6wg?s(9xs!v25;tS"L^ہ1B4cǕ>MYתQ}μ+a"xs> /aO]7Kۚe!Q[FJڵM ΰK*T^~:< a*nc8+1(_g-0\Y\_,,JeW&-_|"F2Z¿߅;zG߬t-B^h/E1@G99]V$9oˋ|}Y'Ǯ DVhws^o&?,_Z|"ad7#?.6~F],3g%|eu;{yq-)Nd5N>M?ήMm-Nۼ ;$j[<j::jzw+E̍p•ao@\5pઙkPYιpլtfp@p 6W]@?j޳fWN!ڛY/Oϗ#}29N~NgbYΆV.Ž|Ew&D> 'i_OXNz/8)oc:zxB'#7r>=/W'jW^Zel"QMdpN81o@O,IA+3\?],u{N?|2Zj>E>HGKW5h@\_jNH䞵dV;^Wg{P̣Yk"kSשZՁC4M+ ?.gp].? og19<@QYPZ A%d69daslTf2X5bD]jv@ A3$E=EAcqWg8(Uٛ. c{yDר;LY_ o\D._mnpKp%rSկ@{X캩XvB3vtQ 9*AQc,(ś<.96TɪR@ 0Pqz+KTj25+T тOI'=61D0q֚o| c+~ 7Nr/ixyqK*⧄m;j0GVVQF+,D=r݆j0xXbTgo^IߐcU#&ft6J2KHc<+/:r(lv4FyYܷD}v+[k>U)(D >w+jlR8 x+1(}ik5Yqd#9=@!pٻ f57Ѻ3mf#n=bc1/k"O/NobENZf.CECcSl#hֹ(Ŝnr^D`OTW."f9F+Sz\2aG-'ņE0=;sFv['[\g%=Fo:RoC >drHX]MƩ 8a Yx K,Wh"ƨZeU Ԧ1\}JZ)-X * 02* 3;[Sb/?6f|ccLJ?*!O=?}`/8;.Zg  EvFhZkXEmאb+yYW^Ȩt.Yd'X yȓJ;#v7sF8ks_P}g#j vӊ/f4)޲&uRAEJA!פPdา-Uc]BlD! 3l%kD'&9Y!QBYTIDuq!tn< Qـq_IEDR#"iP%o3BV)cxK.D+BCU)lҊr JYG! 
Z:;p&XGtDI#9Pj zOwe#YQpq>kuv[%"tEqqM+ VAEm\Bŀ}}2vӑՈ}nc[e#,qydܥ|ኳmr~xC}j uӨn~Ps/C cdjv \ TEVY.E9(̅QiecfhƎ+}6\Vֳbu_3JXH8#ᦏ4rS%*fYHQvu]tN!Œvw8 n:,{?[2d=x@ &Y`Ɔ.ûu[meNEQ7ՍUݚv "6Q9㘡F SZd`hgZ92sp#"H!w{9A> ZbD2I)RH4ڑa-4Uu=҇@\ѡLV<o9)̆>&oLPGa6vwUTBFNѤ`hYj3 eKTh  qqyq0!}m> #ܟXӶ݈P t'wiyq׋-6cR&RRmeN&瘶5[ "o=/߄tg=J;m4rou[e sb9 $9ILRY8%DCPho]RugOykM_u\z2"0M4,v_w;6t7سuxa`Y̸d1' N9 >B(kɲDMpOxR, #01d {3p rKE:TSȮuҬp]2 oXq`BpF!(%h#Q3-1bK;'`8W3K2XIl晈氵}1*!Hڊ2D`lJ2AA]pap`4GC_p}Ѣ&nR]L2d̃6ȡ~5E=3(^F1'3/3Mň ϧ/zi4?ķ㺫3OzkxrOq'X ԳJU -:kuo_}fP)zPa@hx"EmmZwqAc9iwLһWvmxCz||9Zeo7?5.V z[-#7ddoT˕U^Eecu@㣊vc 캫%Y?ydiPN&t"|NxV\kR*z1EZP?O/GU5MrH/&gI4T]qJ!x"n^J@ΟH=;ZK !C.ꎌV[A|! {س_{I}?Ͻ䖗>Ej},&ZZ9. r)PiPH"{2)n)BIםRLnZ83R kxf*ǒ |_'S-b}1}l;qsKc6;7ofhj\P'#qJY*0O䚍t6&qAokLv~Fu&A AϹ8T|$ .#pkH ʑM;q+t]ww_-}}-o+$KɴOce, Pq'\D-$O&QPdr¬/d_%HG )?` $dIȱ8I3p$ :ng7e@qV=4YRAy' X8 SF4`o WwHeGP4u?KBH1P)Qs]\utR%bIWlș0XJ(s r8b#0e"1q&*6|( @D&)aJ/I`ء`J Վu u!r!rn1@=/a\G, @8K_dtX|0]ͷNcYb.QCm,V#.:,1@@0"!EH@cDc#ת`Z٭ ԁ/TidWvzGm`R9*tfNz 4(5rkB,"Xt0X_oC z \*(uLLI)eI%$EerO{W 4q;1@  P[ iG7zӶiN ?@zJ ꭍG5/rOi.Y *zS d:@5;pcpƵ.#{*TA:PL.b>*b4Ix@1IyD\H!Jj*uz7Y5 y9d;cJoݹ[sv cCuv,ݹ<9W*iϡ7t'ڲ|}煠1J2{?i>_g9hk(|M"1Vh]>325LTDǴ!%0`0c&xe ˉSa2#( $]oqW4UPټg3Q>sA*6 ALe_o7)p#B8uPdrCM{`wײlr~ 7p*ѻĤ1i Q%fo!*Ӥ:?|W13Q/r/_}siOs37@tJh!8Wڬ ɜP<F$xaHLR(Á#< G({c 0G[pì=&-$B&)E4% `rȎCq41%e1Dճ/ <cDR .pVط&!qw^Ფ酧'C;jncx&2%r$l,fs**5&0c!gp<,eFxiXO'8k<)I(XXeRZ >Bm2&}A%8O/uO,8~a :岆9s]eoSPI Be!aB26P}oT>C nC SE}Kvpw`{t'/YxWƥT,c1DdT4.(ut|$<1 ll!I.{[m=\7w/_>_:s %qIKWLG~^L/.eUӑ?}?/o3'e8p|dq_Y:ƦƖ?cXXؼnl Nu Neʍ ?F*sڭw 4^&qhp8i6[lU1P#;RWz[ni]m>T(_b*=~]4m6ScTվȈ5]*$ q_W;ٛSWj"ouz1:7gۨtg;ۣW_-ǢZ.Gvuy=e&of/:&nE1}o ގ55h틦4ݰ Bh/˟ﮌLarij-vuKaî2s1?15w[ŏ'r߿~s4JyնT`e:Ytcy.?c} N/zH1BzgO`eKmr2>b+xnD7J];sӍtnMR:|.ɨ*.[lwTFe~XrySE喆V|`SŽNJ 1tƠyަəG'Ƿ<$9.ϊ\ϜN\{)a]qC #yQf&e2I ,G~пF^K.ތhIMlp62[K&3)i,D.V+eYk0I=#_~_Ewo{cKTto:N:h'@yuy!{G۽ JQ>i~!?ioOܒ4?{6!k k4-Tu& ;- emN:5:2h-DLEtTVhKY1d IjON[eϠCI4rݱ,[5%:ePq!{-%ri :d^LqkuT^Iq2Bw9ߝؾxǓ$5-3*lWJG_kY"4R -DK˄[3ð_d^9ct 'Aȍ Y&C %ze2d+#_Bz8:%P>pf@dmq:^HDT&K.dnz-tXM2ﵦ|w$ 
$~x.GԚY񡺜W࢚~*yWfqb[Od*&+ÕMUMsDڪ{N]ݓʎxV$]LG4ԳRGܑ0P2LH&.щ5Rr`>0r4_WʃM]K%OwKCZrkcă$߸T$fh|Z#$/!;.&ҀIb$Ƭar|9Cw8 N-8QD.#rD.q=u:nǤi^Z;:-QhcnwNϒ/êuY\w(T̳hաgi0+%aW7!Uh}v7 K}UU噓~vߘMH?쉽 LPR@!ˏ=NZ+uqRVǖY(%,ɲZ\g&$o.N+?Wn1~7 Ԅl .4#x φ8<+O(z8H ;'P ΫXpO}lp@R`vEcUw9NsQo*mapɲv&0 cy'?/$.gEwLȓRt^/ЋMpD fHJ zIV,u+ԕ+z4 %hUYȡ+-H٩$"TΗͤ?ϞfwЌ  Tertla!bf\0;cd|=_0rA\=1E:XozӪ2d~m[pS[ _,jﳥ~ׯ;kHE>ג)E 0w[3~RYi^[Jn&!SmΔ'0)T[OnAIn,/"EĜ@+#IARF@iɈ38bu4& (=a K|q E !(Ƅ #52(BQa8-RLp7rCklGҍ]}@ 'K׌;gFmڱ, ;E` G +!d;°܇ J9"8`[i5X FqPj,-WڸHE&EEgDGI@wGdTaKX<%T~l%ao"Xy O)PS[5 K| *Z9ŐElac \Qē$nC~ϰ^tVd+֌saP$R{*‚Swdޟq_7fV{;nB[?K1|prödYd%<|aAEGx-VK^~;[uA3O9=E4+_￝}G) 8 S0>ABH ۻ*D?='iqC~ȉR4g O+Lwx1xv;e''wyt'D,U!&0q>jUT!8e\[6ĠFĪy9&r(4Q( Ȣ`;X+\$JMF|klหxإ [pa*`/.`DX܈` Wo{&bX][^A5 cp-@$6\ǜ);L +$B. K;Ѳ< kyK nH0"S^CT3) .1f*"C@>$(wwfIl i [Đ}xHuux|t E̟߃FWT4XP)rP&g2nr/*Zk g+xG5kā(XJ+A+(sژht32Ǽ$FDI=n\y0wj)VmCf@u:-}HX{,z,gE>Kպ%eLIYc2rKA"EvHJ,T#F0 F؋FHs.q:t8ťQ kd!P:ΤTG'_ G"ڨ0Uŀ%T8 y.IXZYh6sقzi5J$.J5H58%%6j(NXөmCy.gIzbŃhPQnBZa2FNI sFHtJՑ`u3O.}zUOy6!KK0O!I&LuiQk՞c&j|4O+'ui_ \T={fqN†1R*(g({kX4āyQ B [ԭVARg̢/2hX6CCz(j۲N o/dKx %p]~΁qJa!$)+ ȃ!iT |)@Yajt`zw<^.F iEύ}/5w{bO"9n,u5t!vq]l{JU6e.*$t. Kʕr${.AiFfPlgr Mj .5|cVWѳ*Q'X:UՏ*#meXQ<48L(&=32<,)[N>7˨te; ӣ^Y/,}IDn;P9M߬\ _`0Y5Lq/7հ۱NaL2}Zhٿ_N :q圬Sn`4"$cH_Z9F/__K._4c؟Os!)\ y:Lg^3aES#\I":׊K3ŋGjFݭ͍nArӽ褫7rE] ^Wrt\}PJϓ8gR 4閤<-ƣu95,l#U>SRPzIAEaK ڐPFde1* q^*20B tEKjKi':i.hv7| l:J禩e6QlIYCBm=Èa)PQq+7oN6߶G`yG`d 3âҌ;$``!AyZ޵q$/H~0|N&],9CGĘ"e>ȇ߯zCHTS#NtU_uu~j >WS814;ZM<PC>!>P?xe HIʂo,()-"DnD:Z! 
(bTDJF%LΏHHeZ40&D92h_kb#:KMQŻL;*`0+?< {'qC?:(.4{;X+{a>|A)sjZ~r$ShF{ ж>oA(;dQ]x5,C=$)Cj '䊜k9fpG'9jm'pfN!S^@5dΖ08A8ޮFo!` j;\ڐ,G GEjKt gjs֌UF,߭GSH͛voϴz\ke/Mb~ն@̥0p1[novASmb^zuƫ궞ړwtn'TImpx`'u;г>l5}mn{V<Ȭze;:RG43_i=&O;7L︩9fq<ë ޜ}o^󇿿>{3٫|uK\u#0kpm˃|3ڮtmK>rW򒏼OGP{5Z+o?.x9 gMvț-7&a]\ _4zMay[K7 \ڄB9N&>'K:O''/sJSZ8iZ+N3Oȓ4JEϓК֞]t:t\x0W59 6\[A~۫o?Ɠ-){Vixͷ=+lh<yGiկ-%Xl{4H}=]i].F|U_+p=J92dEy\R=kv!USv%@YʗS2 D}ҚDBT(BB/q⥄ELʼn!iв94P21S ༷T,NH]r0Oi,⃽q)ctb"Hi}d6%eNǐX>PT.9vrewB25~8뷃7%\'WÜ `8.f4 TN1=ũNop^'ȟ.Χ.™r8J@ Bp7u#烤9f5`9,d<~1 SOriC֐-ap=Y4gp`٫Q 3k*cJQ {PVXN)u$8Ad"|^4rF05kp äRq:ۊoШ|z]oluMyYkdGh$iYu:%kʿuHM\tKȝ5stՕ˴9Y(M'ZbGkB98 LD˔1Y YZ5%Q T68K->;>O,MYvڞ<A۠,Egy0Ϛ5<_ Lu%tn<x$L˗xkv7C|zs£ Ϲb Q=HZS9-T|FJddp!!^|6Yq[gxKԐ?kk@;q%ɶ奛}͂my5,na<݄2Tۆvl\?5~Qϗ-?qAU݌ߙ/Y6 ^à.@;_ļu7{2"}/W<zΘ8Zy{L4>fC%<$NΘJ"+U^ ))0$_9RAaA~=:"!"0cdtQ{. wDhHTQȈOr,( CreK-ٟ_KŁޱ_n=`=)norvELTHvA8t+`9n8 ŗifGx4#u`"mqE= ѹaJIriiLTAzJͨ蕷.WBhOS-~/yfߴ1l sϑOk7 &j .",V^i4Ѫo`t$:THt+I pQ TZn"2D dyA -He#x}DnR s+ MoFaZV攇$EL|D*J-d8*ScΕ ڲ&*Ԁdd8.i-c@ЌK*4.^kbB靫Rܯp>7,9.,6O"ŏ;~VyMκFl.,9.rLʊ`qk9ɢQiK#.&&N&cEѤ$R)\B#U2~XfơG拌==|ϮgaUq7Tn4| gbKj6yN iW NO̙-KJ1Ε4:+@%Zlnx8 q&W:($j% m [bod(fWvqնv`w*2)5*Å<)"8]1PddiY*[;I]FR !ȢybP #znbsa1q5o7"aE,6?*>v>qq>RiyGB"(5ND,j(-ڤ4Q LȤi*tUbr=Y)KbPH E. "%Br©:`j#Kpw z +h 1bWaq=d!;LElUiwяO(TxcTMinޏܝ@-BCeekBGw/axЈ) Y;*INxA>)'6o,׉E+URFĨ73o-e֗ AX&o;)PiJ XϤY+1(SVBI* KG.ẃ4˫=ϭkͻScǜ~ކ53&}lΥ2uu4gwALW*e]V61TZب"Tf68-dE#JA_XIΥ&RB@*^4,x8`eңBD *P-S"SZI?}nxzgo-Vw~Ka߼yAN2EMǴUԈ&:x΢#,,7x‰dqL҆-fX}ʼ+Ip;GMIQ))&g)~O]44.lKqn&pTMV+RhH$mƎ,$uw$QIP6Z*CYasjCKєJm M^`U8&DȥfzV.F)V<<§/kzl?xa[P+_ާ%ddJ{EF-T!D XDg0_\u@H9q`:(G(aE=mw#mC=z"e5կyxm L^ބJ+4)WA7\ͮǡ["qe1$4M p #W] H1p_9xEO,e !o^e^iOiޕ6mlٿI&*TsM8_b "1,U$!H"-z@t.mb$8beDôNd rk(K}4U`u!;o)e:R݉\]73mbH9Cv`2G0ăxZ*b?K+'%!D,d<DzThI~?Xp$Z06E2LD|EdӍox)<]38Ku-t)%n×ljhuV', 1:R]7M&\3E4Hbz? 
9~|HG C?I7 O ;gM_(<^ mL}[8r7=OWgkmshLYyC 7PSG̾iJ[8.pV LDe> ?ksÚMbFBam52RPͲ=lP C7bi$OR$o*ajj&0s03GZDܱsO|ig b,r<\-Ajyéu#t8 bUUa}D=z%o]7m 5ST$Jh,R/# )ojgS cKeXdUj)UF4JLjef}B҈j͘`Re1{ ^g=y`Ea O>~|{b'â]%wCKm L-Ys 3!ٿ]-BΏ˸ ~pnR~9nkf۱{KA.Unt:nXMzO3= W~s^jخ'|{WУ  ޻"Ϻhc%O W_/O_U=XZmNWuk\ydU-7X b-]m+z;#Ho j|+@+i:]anRNDDWr ]!\i}+Dkt P2fZ:A\k<+l50tBBtut%UJxDWXSA˼l<]!J[:AsAѕQ ]\NIZAx Qֻ:ERBZ#=+%BRBWVm s Ή{0FI?.*bxMUr>J0P3# 㙋Uҫc;vv(azPJG|uȾY V :f.WC,N,ktp*nP|w,irBW/]%.,L?ckk5 Dg߾zq}%H9"*QxNB$KٙV[H Aĭ4I((E1,h8jŹ(,T0" ELlؗ}fL~,X_f˜4ʭQL0y>~z,aRwamF~PW*\//1RnX݊y/x^yw q2.G2ٓGx<1ŭ 4u )84`G1۵ -Fxz~ڣsI&!hjzȈ(u[p!]`F+˔/theRNDWSp7thUe[W=D`~bҚpZyP"FAWmEOQVxDWX=wU0Ut(jQ)bOS ]!\j}+D+x Q* Wr]!`+ M+DiuKW'HWBIM|+, 5įJAZ:AY=+ &whi:]I mJiaOvl7tp#'BvDY>+}ڦX~Ndxj}Lp-$ ^JckІcmR푏'$BʛcְҶ!)ϱ#K틮 BjCS+0ʫ3>1 G߉-k|m7-]=E/HFpXUG>V9ҾPچvt%ZV͍Gt 5T7%-] ]1KTxCWK_ JtBM]t(tŭ 'ByCWW_ Ѫ-]"] -VL@G1||ju%ȡugvt(0K0N]?댯-vI?Tנd=xvH<Lfw""tc1T{WJc2L?JCEY2’RjcEaf8~yH#kY7{p^FgU ^67whxBa4Qp`2ֳ*[qGkL/Iʋ)6R*W͂u{χ?flyhl[ l,m` 7Mv27F)"- Ut P_W?9 {NH¨UHչ:A$MdN`0qܨ7nT3/< ݥTͨ6w2w  襘~ ^.-UyW6~wf!MrrWdI^^px1Zfn=^]^LW،|n:hfi0YiNw!z#%ՉJyo~CEqsKίX^ yip[* &]{{%MIgo(烈ڶ÷":GE,iIHcGYv4s(q!1h`E='/"uiL O̟'y> OfRswܧ)JkoћnX^;^JyԊ *q4D_͢ʮխ)66+mF?}4ғ{ڶMtjGاR|} 7J{rs=un؛x1w-Jg١T A؟s0@2^??,Wp€[ecXa@/_w}.%!et-">¦Tib_6r"֦5Ss0uϵ6nv]x:ԕ }øzp75PXAm @zr+=+,?tpM+Dٴ[z|k]!`+/th m:]!JZ:IbLoe5p3Y+Iݭ uTڢҷ]TV0q`^b1I(hnsn[pJ>"岗Jy )W8#ۏ5I((E1,.^4ʘ ϐc=!`9P0woC1ʣ [Otp}.y=>b QZFg'YxoW(KMKWӕ(z%0JNWcoU=rZN5l:YdKWۊR5DBW5eiN +N++/thi:]!Jm[:ATK=+EB ZtNf6BBWtBTtut%zDW# j ]!ZxBS+Ť> ]!\f|+Dkt PZ)ҕ&\uyU1ە_ H6n? 5#[ '4@›*DxE9I[˙KQj)RҖKQpFO0FBӈ!d4mzDWKrsroJD+y Q֫|:t*^u Fo=G_R=rQ-XKQd V5Jt詀Gt et(Ktu:tSzDWJ ]1$7-] ]q!d ]!\|+D{hrݝ-]$]iAWtHJ )+-m>]iѴ-]= ]xvRpcf'X:S>W-^mĵZd? 
%eZ){srA ¹fuη#mO.hCXYJM=%ܛ RKthizt(OuYp ݟ|E'X]o^@;\OB[%,Va5;t@ҷ0wl~i]P0sʼG`ܧt<FyyEϝl_.O7fԻZlMÌ9I•0c<g\Y+iNayATy \qp <qeG!A^Y8f%$TnO 8>($Ǹ-MI^$FXC20힒JYӡCjS嚔s0D6]جIwT[^wT$BI`: 4 &\0w$mԤd4]'4?]~{ٞ"AžaV<{1֤1-Vզ5M; &^.5k_h& ZP8z =KJj9̨Cή6a?QQ"hUB(e(kXI>78΁Ҥu9$9% v[N}R#%|PBtC)9Zb_"'QFVRsN{ryٳ oN**)e 2AOk"1\z 2]M*µd{D+ _Q_?{FM6~8'E,-GI /-Y-ɲI[pLb]XŪHVU+eݗ=K!ٷZ+e@w*9! RX#bl2R(!^L+B&+UѶwj ~aEwh7fu2+c+IٸF~\91]֎vLmǴ|@sVZr+#g28L9"IH?T}f\=m'v:4VQ\Nc@}jZuf_m_k`q]~hq=οp%ܭ%}o~gKXZm2_3#2iI$}ХPOi8;ySKaF1F%yӌ R7>baזb XPOH)/Wh(=$aE o%ĊaP^L )@vXoz5GY?c~Tيd?b3e1MO-(ITc'B7dj5O7kiIJ8cJBd, Q,LPB2'1E51zJ}/E?!}í#cs|7KUU)?iU +jQ/zjYK\~߾L.uEy2Y#&,M*Sco6WYʶWl Ѱ2 Yxn{<ԗy[)V ɑzJ7S S7Η&.7(~abiW;g&yI#aR2R C)c 8QL\ 50[rVj}D#̕@K`BzGP}jIԀ(9gRح_▫5L:MN@8+5Lkp!OWC䬆i,^jdV`@p;`ߠYE c^F0lJtٌ81 24L:R g5Lkpq7DjB^# km߮StVzJe AU;)x7:= T*b2jqa56HjJEI+$Xdҷ" e֟ӄ#lB)Ł2` @[q  l,M ᒒ43P D2m\p)pI *o,H0$q5GDa1C&g'&kU*Gobdx* \/ &gVV*c0I)SZ#I{.g&@pRS$>OtMTDh~.k7s.i=6.UxNp3swj:w?+F6;nag;(u4/xVP g8/Vf;v{>_pgl*IfnD#amT .돦\38X$BU!k~ V?>_(Mʥލgp-Cم%<<A&[!r\q XZAR "+餖H=AZViX;|E`(Bwm+'BZFpA)ӯuA W/5'69}@fחX f_FɂaX4RW eT-QBRQS*bK )akLQphXF50ɀ uep@atj?jXAp4~sV+$YԔ`"g5,P Ɯ@ѕ妵@5e1рa, CYԔCL`hN] #v?2ݨꖲӤ֝<*) >k CZ}OgE`K;_^]=uW8|a,.}S߿Bc(jݗ>˅bia7x&a0 PGD;N,e9/*\;IABÁ`Lh')WD}Bѱ,L\0&dx7.E{րvɿ.ҵ 6ly*G:/W^r_P"P+\=ݸSw@LJ*Ms.b eW2Ol6wt ™ہXȑj׮OQ'{ɊR LIŠ{~F6+ɝ#q{RsqQ\<\yv`1CHf(u\_YL"ok4n9.@#}hV^wz6 t6%3&ʐ 3>;tÊ"~*=9%q8j<ұ4x%KD \8FG|QTY0<az^NZ.}1{c]F8?iY8P1N㆐ϦbnD c{bN{x,fǣ=1=YOb7lu." of(ꄂ&ߊMgQz,S."y^$"ѓwuiv :0rK|l Czsta-o8a}Ho1ٍ$t#%w!G }1Z18I󅀯`CďX&zr_j̫[nn FwօNc,zg`o51CN(O꒻)A[0G'it.(à_ޕdxW8|F(޾>kCdݎGbSr{@]Cy;r+`guNB(-ɓo?(w"1=~[{/`! U_z3eYWJh]n*vuKgy=,g$˒wn8$ݥbx;\8%8 zP4R{oKo{3^l,u''_/%=C׳<@?4hK<цz̴ΦS>p2V,gdȭ%IUjQUqΚKxzպ,d8%3vㆁr٥GJuT0]5ԹW?dgTʢHz\d=Y5^Ic]޻*с?Q-A|= buV<܍V1'#hk)0{'ޫZNq v":ZxZ}Uy~'*UʒۙnIԽ!Ю{ ́F6t;U' ax% #\P>O~'=c9EYEk%4d(+` 哙-:w8#mp ~%k9 )} $YBF8} /=` tC1#U9J-Ȓ!atGL ?! 
Ժ("y]bLG2P ?>;|H<}{ɂ94n#mѡE~ XF U.4n-GQ=:D=eg cM%0)[^ɧW*Yӷc9<~W7u+~nC!TPG%t*s0ITǨ<?: : !R|dnXh~y$(Ԗ=!ґOpW%!èF+WNPXB`V޺=:~pE;k.踹#TQ`\j3.SlT<@qdD 1XHvǿ'_d@TS=dI.ȃ1,":ן?ަ~uF@7 6TsԽ8Q+ҧ֟cNQ-!z ,^)3=lYdxd~eBay9I*܅/pI j z[J0&M`ᶒI8昘$r{rko8BDNX@ϛF%JnRHCaJ =lFL۪΄bn1뜃'm~jh,G>Q,֋G4sHezڧ}F2(TCL$}.u@A%iCR 8dU>`S1#S3>P˯K;(KD u54&j'YΖSei𙃠JX)6,bɉ.Ol t\_0^e>sRjh[Jjcb4-cҜR2߮'mz,dqIUچ;6I' U|e-NC nyǼ {6n+vsgZvnzT*Cm)߽H #J}濭CƷ,Oܼ~]g -oļ x6:/KELgqL% \FIms1ok)jMPXJ%$~UfEk<-!7cIE,hL.$9i9ټ}W0]@ܟmJ32=60.T5u`E;筚 %}%w[@wvb),FzL^cLbEn@P(fV-zzx;["Fw^S% c{'Vr-e)fl=V@CͫNLPe(g2{5eGq<.r$P=Ml񯞴ǜwWռ3aEڬ`ScV/߽Xwc׭Iԁr>KĘ^DP>`VYi >/,xX΢OHJ'ɻpx{(f›{&NRjpg?B(T*(ߦk₴Zl{IYh*QVł{)ؚJI_ vիh/]Hy$4Ն)`81aeoVIE( 9: 6@sS)zEpv!aӹf]9K>^$$DOy&%0&U{KLJ.F3ET<ǹS{l|S+Ze/99(Dtep9>(+ i\A;BnK'o_1*Ղ 4bM@b$sZ˳ SW%0 <i9 ʱz7'Hya$,`-([-1`Y A1N14B54OKBZpzz{2\xP|锘 _IG']rK^.X?!И&5xFxNAiS:$1 FV4V`{%ec'a^.8}F$fq,e"c|M;=MhAxCOhW\qٹxtf2=a h4Gθ˰j(`:Xte0_ J:HqC2qW%d8U~v M0~7p>|jh+Gn6@vҠS3}hJI7w f iOڙ=^~@ݭvUO>(3B|%EDCfj#{ƨ*{"7{XҼ}orV^2aN#ui@\ 3`e3) ɪOu(^-fvġܠ22q:%, %wJMyzn顆% YPJ+_곳Sk~zkyԑ. }izn] <ε <;dgB.8L>=3BhI{[%/Xp+`wD$(r/J pZE+<JHZRë'>VOh"Za+m^D 92/5'[h3o 1Ppә͹n2Zh=~(FuR9AA]+1!Jv@=(5;< cR{Pr F 1R=T5Dzp J'QC (zƇ y$xD\Ɲ*9[UlvRcypmQpv/s/ T!\BMNGҋ5x6\ROˀ&$I,ą!}u!s >Qh)B/W  :|8mgDDꈨYݏUz?>slӻ4vdo븯jŽ<\py@8:(u~RƭliܯA1IʴHtNz8|i^ĪDT=v+ CX6N/Rٗќ+`*FOsI^Ť \dńS}N_ Rpz`̥ })`͇SaqzO .-۽>'/{2mf + p lחc{~WDv{3׵1.L~sawjԜ|쾙 |xq1Q;ks.QR60o|H*ߒT|3$YY%-fw3EŴhwp+X<6+vhLV1.um4.s׉P͕W'˩Qj MsG&،*ʹ%&Vk ?7f&2yi?i涁^:ƣm.'Ԝ|ooS6q3,UMPf|{} *"BDHFdy,k/p,k!MQQ=oA`kbCo.a Etp#JW&)ZULhvίd}]ck;&^u.3λqR+j} {h|؂ 濮21Ͻ]FK_} wIBZ뼄R@a6ݟM@r3a`ݤ.}.z<6e}csyW֫^]sI)<6 `unzhH"\$a;M4&Cb)X{ڊz\T3G*\i#jXJy?Km|2-~bFp;Jgy4Z:a+@b%AV|py{h ôZur ~R<6hD(Eh:Ͳد7XM 6D]i%Tխ-T[ hsVmL09[جx]&p FѾQw&Jh KC yh1ڞ zxJhVݿ\JΟZ, @)!u/`h (:gfQrtĢ=RZA µ57}f-H[ NX z'I6FYE1.QsWS?ъKvE,rgj:X1$(ӪFa0VY1 ʧ맛nvM*\Jr|)NPx6e~\ݗK^S>/b˪ȥI]w_&Qz7' FՏY9/o_? 
Bhr<g!⶘O6[f6㴸]%8#˟e2y6oW}ҹ-o!\fl qb(@],n EÁ : ٩Iy"'jIְ|Ul;:ZpwL`ݙ2V0)ǀE c+9im\@`]K؝b:;3?~.w0 i9f ^ךЂ>d vd6stI,"E 2y9'{h,ǵSwVr-^ٗsCPĆG e#OzbxbRmPXxC+=rIXh$Ubs4re-S 7i }yh>< BR%y"3ARct4# CP4AxYҨ @y獶9$KZU`AOcxTԇ]ƎBE; [f6}š$cжs'OsTI(@mIWqLxֳz_:v/<@L(aBIFi"Ic(xLNP4Cct,r21&'E̓p\$؋h $e:}ҰAZJYǴ1E% IVĤ*_ /cA;V.2kIUoCp?P0笏ț5݇ΩQe48\Cڅẻ#V)rKPtնb(FU4ZAe@ݡѽxã@\~_vW=4}FCy]Q*@n*p3iAU qda$t6_~?0X4QJ޾׫}غɂwv~y̩C m: 6X@ܚ*$cO}_m |x3{=N0: 9򀡁d@wrz`-7㛢Ar}bNMd L1HP[˄a%Sb1nڬ7 \H7 _Z_zsvEGxLIwwfW ??eɽCoY1ʿ}~9W*Wܓ.0^q_LpSŒI% xf4xPʤx/n !CMvxb;47:@|8̕u-=yoJW<7F83ڲ^v"n_ROHSi!2 1MDn6,L}^wƇ-͌v7H PlQ&Ѣ.ɞ"r\]7xAx FL%FX2y\XJHg9a$N(C0TPcXb {xm>dk^1\섫cs>$:"gGX4H& ut47)G󄭇QQvob)A}=D?NDؕDsiY#۟_aVeDANk=Ac'\jQ;lgp83dU(̵h"<gUerCb%-aLE&F#8oC 94rt`RlԃOmĶhW6 Or+RIP,DVI4ɖ*U1u?0Khq+GW>$ ׽=4!R$Tt53QcK+ڒ9`Kx9l>[-PM՞4`1S"zٖ[:"![^9>;U*?I@>T>>ғYfrJ'KRN(&yh%ޒ~VTD,C%Mj߁pFٚ'5{`j۶t^f`d\AEW4|3=ftz۹tKM ]KI悎h;g9/OW?C.@cy7^cg"bg*Fn~DV܍*wQRW &S௿~{&`*}zX?g?һ?Ŵ4W.S咯NZxuC,,/{E[<D[_&Ȣ*s,Oy/Ot=8yy`41|X]V Ͽe(8+}UCѫ"OP3s+9D=Q$qr]}<3Nx ej/ѽ.ΫZ@⫡x ^NRh>޻tۅbu)n,6^r̖k MkZxauܭ۩KzȎQ2^u-X6ioY<͡۩H΀%z[Ujmh)zM:a{y(T%jJ{LIvQds/! yJ IJ i8ޱ覨v*K(jcR8=|QP="[-WӋs?eZ"|Z^DQѱ9ڸGf\4iG1_<^@ 8G1b* /:mHǷpxx$=?o vH Iڝ&J :b2Ŕ1Kt<NedvǧK*!NGLǡ^=݁u~`u#!8 BQ΀hMd, "28G 31N<&m6\ 7$^qo9 3A<5*"")Aq`ĜjeLlbQNΨ\uD}//H0'">,NiݛH$慸-6a)HZXAE7*Ͼ!'n^촕R'0 tQuӀU5AI(#SӠrkQV%ciL$[ (*u-*AElV6d%\(h.)v|/Bn_w4M2iI ]"kޚF6srNCXG6viR,,Ʊ$X8%}_i$ѐOrU@^u G8myOCOdMw=gVVn`97H]-LS $;pC H'X>-s.<}L\/dgi(i(OӻoB/Jd8/(1QᆏJ; mی`ć!-9m.!v}ץ44vZmE/ Аc K#Ư.Yp˚c 'Kr Fm{aK]HևS2`r[* MnjR$إc8O]٥~1WF[lݔZanuDS#ևP N3p0lIV$:lC9cdŏ)c& "%2g(BS[eXl-@IQt<+&y,< Dֈ^3z>o4A/Ym,4YXŁ[7%,XDgnjV! Sl/\(>nƼp' lWy]7٧pAm݆+34-~W2iP2<1"inVhk[YfSD]1zL闢#u@iU>~% xXy0řń! mMO,? 
NEZ7fqXЫ,h+bOCN!x%^'04gfZYfG1\{KAIr$yp%n6ڸENlQy# z xw|f$fɻ÷ODˮߛvU|aH5yDVRa۲$֩nhm.(QAǸ׏L,=Fɘ~=yyʄ\GS :X;+h| )MmLe"dŝZ^`^R@ }z×ȸ* gg+DS_;+U".dy7  .LdoϏyk)O4˲[  _jfdќ.q_-ɅBۮ6F4Ɨ[LeZLb ,[k8W&ULTpo8`(cb+8GIp"/KjH5~)xaǡzüɕȸqHڌ]!ǹ3zwhP9em+3t;s0)`s03!I e]d NUp<%j6C^zEmՇ:@e}dz!YzdvFʪ+^~eFB_<߫NO mTꡤIұX4`w(agL]Op Jdt@l.Oūꔲ<5V㡠g4OSmgDFKÔP!ӧ,2}pV3վM*TJR*quiI u9Y^"g yKŗ*!oLѠ 3vL|^WaM|e b^ W\%"Ҁꋢ|k+S&Jx& |D%2Z'PԇWU)N o*6tZ"^"v;S*gOVyJd!H<^eVWI 8&rV|܁Q5;sqb\}4<ߚioyMf}g-K$Y|ZGYjciX9l$ΐ$pgz2d`Z$r&cXFBg^6 [_p4Š``V$eZE?c}[4g;N eb-}X30H`<.:p>/'лOY<0z.o|bk岿բOh׳M3l5WQt͢=Z4{(&":Ұ*`(W? # >(fp6nŠE-1}:> '!i)ȯicFVw@0jrw'&(hSPw2)4 u|(131ծt5FS3v]}t^曻ǣ~oPmJPc,2U=0 bǚlWL-*ì<jWbg+%w$3+b]S/wFh_0.W&gr1^<.]}F7jW(6/l kPǡc4GOoab_\Mwgt ۠wxDDH}Xed(sx>aJd򨡄O n۾+T6"7=L*9wx1L4loO@8Hˮ хzX?Az(I$2 -3N27 ELDz~\! m:oiV}@am< ¬943GbovbF .Γ=.XBHd.+`@7 tK:MYtEjtI 5֕=q!*,Xa B1mz )]KQm< ]fKT!]zgJdH#%j`Eo}=Y%>n۸*A,86+#P?,ϛ?ܼXӘuَ`X FLj̰7q8o+E3\`a1g8˕Nc4#BZ唐Q1pbS&R3PS+C|5r\j:%2Z n1f U}n[y-(p}ªdK'$j^j>@BJ!sFq|uS(2Qrsĕ(a@21Buq{MR -Uߋg]`6F~B/5L;R+izAO$e(UgkaG0z i=ưoL)_懹 L ,)wZ+ZFJSRYge slU݃ didŁ[phFY(k܍feUa{&LV"Fk$fX1/VVb[^%4smKƣ0a{غPej Tm+.;9|9R":59c.Ž `j1&B^̷\ T>CyKd\-4r?k#^w3hΗ̢K_oX8Hި | %2nИ̤Dӧ YxEkPvq:wo-_8#<%2Z k+IW3J#,rܽ~F`,Sa{̞K/}c@p~D>D c"c)o}R7}Go3S1q=T"㍹j*܉>zi5aƂZR N)!*>Viќ[#B{QJ'v8+q"WhSǵY3'q쟫f>9n:B_Dy `&l z3j`T"LP5bBg ϟ~L8~'\>,hoG$0Lz}t%GE~ӵU\୅\χX5 1G;>g30ɾi`f,sbM{&=XͳGhzuXE.ZF/YԇwFyxl~SNJj'q]e R%4,^`w|ʊ^wYKyW]2π Sw.E|ь3/X,{do&KL+|f}]leL:+dWeyփ Aa:Lym.&)˳ QB-jnk5Wl@cPrh 5_# &B/oWBj"a'_)1F <`tAXLBz=ɥ`ާ)xqA |k9V8c&3$>sQ<$E]VNR|[ v/:)"Q]g]F<|o\3QFsD6O8eG'M=7%ܢ6Su3),{܇6|w(] -ZX$8'{o6IJ Ua#-GjH&eT`nxƆ]URb7ǧ6To~ДIqo( ~54ÓF%ɸu^^l ? 13u)<'WhotAXZO?ÿO/T~x+ߤE+rZ]*C AmMRt2$xDJ_ ZdL; wW}!A9bQRl[fwxqvJkﺍ7_3DxHV+vXDbJ6G.^z^סKĜMդ}8<m|>%۰6|&{ܲٯV-'o]koMjWdxw xxdO\2< oojZ/"Tk:b1l4y\k1Z|Rn^e}o|0U&F RQbY#קO4?]z۴(/Ev7R4OrV>nrgx9 #Q5|;&j/4I:wJ%X9Ψv^)@C ~a~b=<8/\f>-Ztb̥з/^x/TxmPnO9ǴH/W6E. 
9~wU ؗ2ٹ` @'5$Dp"pyXW1{[4ލ'G{W}n$D֞ L_3'?")[Zi/?'Av7b8Dע_$;" }y'y/@G~EPv7xM;쯸qm 7]w4N;SVO[TBU]lPe45m> -25 JwiMES4iSԇYF3\x8ݐ6 A{8ƙwmo,:6S/nlƯGVGDwbL]>aΩN{(*Zaqb:a$*AgvT*V쳺Hmy tG\ϳSnɱ_i/OV2"VĞ͛yS6J9Ή8`.i$;afN4sO>wy,F$w ܋t CP }r<\]S]R0mo_:id|P0uĎ1{iH#aÃ~Kqn]Z 9켏llF[f32і٬eΩ鷴*adOhXIUUّRcQ~x"^ꭋ7>Os᫏v~8~ C?#lGl=$̔Ww[NIlZ8AG.[˽I" T[GEOڊj(2Ӂ>_h؆eUY)kT;\1W@ 9A_jv 5 a,,p8 H!L5}azgjYR"X⇕:"6ؼlf7>o:.fV T¥P~.u!ݠ`4SBR߸b!7Vy z[*JV9T#I1wg/9Zo* Y˰ú;;ϸ;%@~?uѐĜk~ZS>٬Za1&s cTfF6Jn'S޵ P9VB)4G2Ҹ] elg0,A;3T@qlQ̫Tf5MM=|9mC<>r1BՔgb'uMVM 6"ϭ'0Q@A"`T}B*i[\*حM )(~˱\99qx&sWJ 2\@uPRMd+\;@$%zQu^_mkyJT/^M9ye M.: =@@;&A09Jزed>%AC/^?߾ &Kl *u% UQvRѩ 2S R&H=fŢ(dAi =2:DRA\sϱM[֒yɑZMX5ϣ:,$0h {E֧m(*Z*q~8ϣů \Lԕ'k1(b$G($PSS`WsȊ;R:E]jڄ,㖽J14]R鴩(P `]2m#PcΞ5vI1Vp@26R^EK yu5ں2MwH&MBҀFtņh۰tUTYuWJX,R:Z]Td*y4TZ&jpW TT1#%J陕t@~jOneJl}DGSν5RҶ`9KB; ǜ|شMY.Ŝ糳5+*KS"緢:l܃%-ԛsuWD 麀P{X[i2tXaQn8ݹupa1Hp%V[4z(ɤEqxϑ*5gU6&EPE`KndGwVFka^tV4"tjxy3ws~|x=u\d.J$dUQJ 80’`+N!zKu682+SF\̓^\IRT,vXT~nW"$zn'k{٭X(A=LHxk͛Oy[%~.)[&;lgl(UbCt09덂Ƕ/Pl4 n)(#TcGm-rՍ{R6[Jv'}S,H\HYpUu,t & [7 I>O.z:A}|P8x 1؅D)W?m)Wxj8X(_;$s?t^ۋ5?Z֧:ρO?9N?8}q|__ ^nKEuaYxx~g^Ig%V;ƨTF5:vyףc:Nq&S"ؕ$7U):Fvf 8;˓88DUH`b)CiHx|s3ir]J1Z5xiJљQ4Z+/%P)&̻//**٧GM6tcz 뱡HloDF6$fN9N:7 Q3S[O&(0Y3ŲK$]JeрJG֕{pZk+{W`mRm PjܪmUZPh㔈t  -FQ>o-O䲋H%.}]&r- }H%8oKF*W:8}6ՆPX?'["4q]LF1|EJ7py$mPJ<Y+mI"\)5$Yie<+: 9[ X!|m&lRA2lWWn(X%݃hsuydi(5^r jZ"7E΃7WFG JW54_FѨj_sΩ CEHRJULTUWc) *KU `F)d{8x` K7?v]f2e-jˮ dVQZ#UuR5˱A/\V2SҚ?][oTI+-^W  $fH3By70 ~#/El>6 ]UDFF|_d\ըݽ^M?h>~-6G \ #8Sl_9d1B1C-h[;蘛Y9 T)%{ckaˁaB ⮸ qX7vڷ֘OӚ!|Ni6]6/DBfZcZkLg>cZ{\u6o1ʽ_mtGQ7\]Hr["Ĵur;noNOi Y򸲘rpXly}xy&(d 2o w::vqZw $arG_71[BF:_%R{>^yncӪ:RTSsHgt: 0Gٷs{ԽS~Т@;6z}}9+>6sl~<מ`{TC OqTSd>q< J(s(ssDZJ 5K?hv,[[1p7'3Nշ|^? 
Pۯ|x2QjE;|pPɣcIЇ{b/;Qu)kG r [fM*ty.<Ϯ3=>m/$uTIGzpd4><|z7lRp }$aWEV#E|ʓ'>z7[]]2'aߓ<gf߽?~>T'/WWFOB?bgY^;է&|d~~|̥M㟧AGw/o,уYp!?3*yg)Wx[Ө<{Ooo>٣SytS_v%;ĘG|3\@s623V0tT3\ӨŏZ݂PA%EhA}{ǘ~Т s|{fc_Oxږ,C_aɽ@$avy] ޿xƥ^ni|<!Z/_Cx6g gZ&F-ke0o4ueD{b |AJ!b5| [\cfIQ\dgˑ@VdK&i.9ԀjNVhB Ptv?9CDT="h#a&nvW>BfL(ΖÌ 6='ˊs_G}n}0]qs_?rQb1J9]0t2x&a|7fvmd%䜪1ڨU}Myρx{!<R)6!Fky4n4ED{wċ:8_ԉ#U+w BOu'iU){9 WxovQ! sciȎ)7pb ߷?^6owtr"!N6 OsL9C$w6GI "ݮud?{TĖ(<=`b߱-D)[FE֨ȵOנo~Q2 io& 9(>+NyZe d\Wh}ؼC78j< "E(Hͷ޳K7&6nM^ /03\xW˿7)e2[ χl %-)POgvٯ70H=Dԡky}ELAdSL1YDf=}#t_}oL@s;_TuB6k(*n/|4̖73+s,ԪD_l/Ҩa/sE ]q3 ҴPr_9R0'P.jőU^FS11Og9 S\=ꍯC6}ǸH <$Ysbݨ%2>h&~k6-uI d-"ܵ|p<4HJU\tiyH4,a3K:H)FcI12-y^[K7( voDP`=N%L}!chbT$)%s$(ћ5YI ;'?_VJq?˻Hlʵ:Rrц9q=̴ASong4{BZIrN9g[oو%W栚xe 7{&`|~/06E5i$Mgbx@"g{{wMk6Y;oaͳ2׸*װd@/ÌqQHIsjpAFsK+#犚_sY7l㚥f5{+M*]6.xU_Y:`ZF>hO%GQzxcL!¡>8kE54.Fe̞!>+l{$wSzX% V2C%(C 2׍Iw82}{=]N*~pɗ\n::n_a[XZs`Nex:갲Y i]hݤ#)U]lm*QeĆȋW)1MFe|-(7j.+Y9fOu_&-cl-k5ǠZTfEB.8GVDIK5$P%cp5T.jM]zHNƔ^#.I ͻ&ak~p7#0ˎQB;ӈjN)^5kIё߬:$WL c@q#}~M xFZ8G҄Ƃ5:BD5$QUΕAHh_5'Ň7bA̞ACH\yQá~latl~/Wi #!WZ>ah& `Nm*O[2hHWT=My޷$(|yj$&a(l$*e7-b6PW{J"E6sRobB=hB0uKR4x&fgԭX3u&_ד`x X&H½}ei`ٶU c!GiZE0h3 "{4-j ~ _Aίe0zZ;rKn`4ZOXh7!hvG?ognN5G'O<'xd$U>mէz|$}?Mѿ=D|Fpg]B]ط[m 6)WNFIf+G4h)?zN b.۳jǮB3o,s:1y02gwYh+Vh~jP6-v?t9Vi3":X 'i<+f=_Hr{ʜѤ.+yOuf#Xg~nl75Y1>ؕRI-Q-z- eJfyPJnj*-`~F b&M$|'n6wj֞_t ~{⧆I+8֗k5!s{MxF{۴w4 ւa_{qnD$j}2Obf*-+c2}^b%z1mb΍NuJ๚F>)W"&L'fӇ&J\Sjz>,^3DA<."B/tYK${I[ 8t21> O>CCFn{_{p3B?;ִ_#܃ؾHr#8FO" V-ID$TK$’Do0|IE<׽ew&`! a h@95>ƾ_dZ?THPµ7F DbC飶dAWK2QeʛhZ`6@qv=h"D(ӷ[r&sEqyYRs[\oX~1|<ٌAVIuZ%J8/֒w¡*\&,60g' 2-%a_ޕ$l%Ik*IЛ=%k 31$A K S"֐YMj%ʈ˝Ls0ʷ#nZd\TP'REr9"YB35s<CQMԜ}UTJ;M:hbIVWR)HZNUPI}l{!KأQ0Kc9[f+^<񦗬 R&@/Y.Zd !ĕMڱBHMInH)\'Cʵ왡^j^ KIW>v,骻͋TĚm洠dl՘{) g✝@?/N~ۜ {?ԫ]PZAˢI]|. 
[binary data: gzip-compressed payload of var/home/core/zuul-output/logs/kubelet.log.gz — not decodable as text]
g(>߰Iژ*&b'[z/+%vB)׶ӂS%)DIQ=]:ЇjrprpPH ^(9؁LZt(0 nbyΫxp ;;|贃{7mPݲC{;Bd՞$VfgP'c3M2"(Ȫ䣳\f9BU^L?Դ2: 4oeD,lZIk8Ijf 1~оcg,֡A"IzpH#|cb4p>I4b)m e H݇L:zƌ>LYG(DH^=.r-f&緬8n郘56oqM4zF%ЅNck w_FαFAr,fv0 8/+oWFAg=n>}iQԬ ۵˪FfQ܀,EhDLѥ:Pڊ@]FIڔ(TctQ.Ҧ50B6 (Sc.mKt͚b56Ͷ6;+}PÆGm\2YG1d俨r9 Tq ceTk"96RwlI&+t|d|!q(EA`XlO9q,R#*JgcK0qŘ#W -1) -Ph@yuʡ)T+V 'q_mZT p[Zε87CcpʭN(6Rh}#vкA`l$φ>5L`q/o\88\bu<{bzѫ K$X0Y OB 7F# lP0MɆFdDY<8Qv{R ө h@aQJdEȑb45YQpdlEmX$îg]Ԇ)O'XU{ƙBK Yg-ȳ|O|.t7_v|)듗PZG~{GH_xxik9;*Q,;*GF=Hgm9_\2y_yLݧ /۞;/Z&ݵ=Boz"MIi 7v(žAT/mn%W8eӁ_.!6Te^0e>'c٭MaUqfŌ5[bkй2C1cszcAXg(D:gs3gp0P`q}쟕‰ݠgiA:ԣA7 VLC%ǡiQu%~>x0oEΒ(izXVZO9e—0VO ?067qYb=Ȓuzj|;pŊjiA !۳9P Etر;Ηf,xypMp.oR5~X\}|yH>~g)~KjӐL1C0ACV3gT);Z/fm֌Ct<}u\q<0Ns[ju: zZFTRO_i2"DRNgTwݙp:2Rq(3^>lQHE_y֍+n<v¿K=X~w)|Wj248ޯi7&Xk&u{w!)#Nր: ʎ҂nܫߖ4X%SEsvpg>TxݳI+mQ& J1RӀbଭ{Kbҟ:ܢ]NiVBNU28 ΨiibWh<-bZ tbw g)Ԋ'ϊ5[ي%u+/Kn%7gK ;N5iqe ttWuQp*_'X#r춵+|x"5D_MU=+MzyɯJޕRUy|ZQPcTo *Y8iЩ'0Vʓ?s2J8/42$S?_=; ]rRN=NLJaV*.cz4y,ݿO/)E9(S v,wŐ9RY|4ʸeL(ou\$…}>,'n6jmM>,q%?O8IZ<Ϝ?C?_?N.!F9yB/(ˇ$w.$ )RLPeZUi{rX!V 2趖%}!Rb9#ojޞ\ VGRu~TZ6NMRʧ,Mљ3#_ T Eʭ9:FO(SSZzA1mMq1Ũk'Kx !Xhc9X*'R(J3<(f= !gujپL{Kz`[LJn[s-DTH1:W2z*X3!$HYnѵA}^xci3E~y2.1)hQ h,uG o]ZAo8!9Az=#ǽ9?­BĎ+޾|u[{tDwiRkwZ^b*ШmLkcâ5)*hW .,zRNcY4zxuONbaC?,yvVҚb"oṫxy7Gtrt:ǚw"S(|_cXv7Cv#tl9pk+  ;M0 XL>m7x>JzY' @=f9 x p%AUQAM,”8TMAf8l]^o7N;aƭo+0pVT†O)^ Lot#^+UOl#^]4d+w<axh C6힗iy̶hyz7T$bVe]b("#3#@_&Oi5W_Ř`Ih A;}I mIJ8[̕{A)31e藜G)pfk2xg^3_!:Ns#yï9m u}Lu<ׇUV/_4۰̅f~ؖaL?[v|Rz`z}C$h1>XlJwj6/)QuF,g[x0.)#:O >v=yuZZnͼf x#"TB@ }g;a#e `\#OtkAKyV-WU9\>G۾:Cu]H jUL)>Yn_Pd DargL(EmA6C)CREq`c `(^z$~t$\XMwWvE_&˯z|/ Aޜ?̞_}"ɝ߽kDV=1'A脑1AkC [`!Hf(2 2xOtjT[Z .: TgKd\W/=P?@L{7f8w_pPnWxdHirZ7`⭕:*D"--@i%W(@YAnu܆;bgK$E/*o'uӾ?ހ^.Iru^9Xy>_>.o!xoUr/RTE"4_A娾ņ?{;]fPITb['ff1n$k2=|#YJy_Kwlsg³#h.XLɔe"4r ct7jTnfư.QxNje!$&uApu*[٘by&NѢՆ9$p1uq%uHyКsıfڱQK f[IҠ8Y5Һ+sN"}~ܵ۝Tym-X?-yUFhrwe\ђ+q_"@Ba›Mf-[ .RX5r= pr4w J2=-zRCWM>g(]c6}@>{DP*Uc./?e MT!aS 7e@SCQ =/hX TbQ`$)9N1 nKQ+dv b s䐔. 
@fiߗQ.†4OuxʐU| imsLKJyNugu5IG| KgQ>xKV*EO;FL"L:1|/]lQS_FTWŔu^5NqBiK%+sn~ I%]PݜhXiZGUn- ZBrE#G|A#(t/d!R\D1 tFb1Tbj ql(&wl~"%oJo߹Ҙa<_W75*dX)3OCf* %þ]Yc};̭<ǭւ`Q' VQT`Z2e)ShnbZ^=+Wߡ J/g߁wu! ?Wxk?N]XxqD`(b҉)MgUD /ǹQcLP!Wf*[<%t9P8NPLFjԞͻSvSw36z-4 SV!bAwpȂe R YpMeOIAwR]`+ХKNj @PF$qHqy-r Dc"`(·L=]3B#G K%"pĘ"PǂKW ʿ@%OAF1B6O3ZCx|# ~Ee ]̅o/BX$P1 EL.  10CO88Hy,<bbBP1-(F&>yg,;sCNZOEe >qO'`d ؅ 7}\ JyK;sm+O<&-kK*nÑ]Rn}!N~iQnP$U4E'܊0Vn~jmQj^G9u+rfG`R"-Dl.{2( !@ ~X@mcDI0b@i[n:&7e 4@+u׳pBo ǜvMIҁU]R+Y @xAX֦H[ӖI٦OM9vd)c g ӂǏd7H6LڊIKrZH \ն W6T;\d7I~"ۊ`MZg+ۖ'c“aRtBo'U-qPD欂!eADOG^(UK:ೢ%!=B+K4HzP>89!1DY>s]y헛뺗?/%K3=%(`Hzϓ,%bA#U&U ƛ5;P?Q}ê=z b&[(O0no7u߾.|OwXTmֈ'+nnv8D3di?+<ٜs»h#r_γ{cB[f*ΌIkkuKΦ͋Asa%gDQN3>g?lp:g:]_~GgH!:Iwlmj ϤLNB3BzgFqi HRJ|:7Ϫo#mצ2XMa աRҩZa 0)gJ).,BH$S۸wA l CEvղۯbdgĢ NNcR+i &J#jJᵮ$=-Ugނ&O>qp?wڂ3S ~ռ_Zntcbu fNO>}s5 U_^n!qE_/O&q;N1!d5Ԕʏ?=(ڧ'c[/Vv{eDGD3Sd<Øm`1Ѐf0`jֽVM`lLW +{i_nźcwmb1rQGEy^{-~foخlfAYl LXmRz}^NOTzԾԀ:\9UJĕLOUq2!-iLla B (-ծ0IEt'w,J)q3# F =wDF(耔{$ \/d‡cԄ|f,>Q* hx6o̹aި8pZ֎FUTHdpuǹ$|:2{ 'Ƙ(sT:lHd0p!0 @d\8?k'[1גwqҮb ,o`هrTSbQDRf)pLtZFKA H t59 Z?X9D?$5TpձoHkbmIlOnO3CTnh[t(L-|wZJ i#.0QE7Xk 7q^L2NhM-}k,7?ld{%]8%&krb;,4啶TzU -+)a?4A!ԤUD$2j!g.h<?m;c cTN7 9)R)1/ys>9@X>#=|=d6_,tg/ R|>>Ԉq$%VHfaYl]?#N5y<##%x8Xt c{27B9\<Og@&fdASeA"D$al'؃81 LByjf6֐8ĩ2- 8Ԑ8)!B @:'4FBU+)`%nޕ5#y5ReJ`R{DF8V9>>@:[|(R(|饫$H +`4?pkaApC߼XJ/SuR[HyruܲgA{[֙rax$U;QӦ&hƆZ,P{=j۲-8}$slY 8kW_ PNߞt~}[PO^TL$F-VVaMou5XɖV~K-zWbz1['+8#*(pIeA,W$&C{<|O ^bq~@b" . 
f%.ZȁyҀK:އx\|\O>/nN޿~P ΓBd>.+A%iz: ߏޛ(xiA j9վ,W9]OShɼ7)j/ۚ^5n4_$Qm7D|(Dx7ٍmuh#͍#;U/"?XcE/ ѱ` G=k渭dav3}mKZoaZPrAvvr-h_{ݵB C(CigX_Qm/(ޟfV b5ddeK#hJIL3 TA@4'c0]>/.7_+֠d ʼnZD+:&z  d[$_V.T?FnVXc,;420f};% /;Z}nEuX(=Êq9RquӂmLl͵lPJQR P+~;K Aa(˒m1Վ!UϨadcN8Z$mNx*y%٫#|5>=Ujܿw1ڼŘ Fi3<ٺ/(r 8+`2"uu c(]J]e}}\.,ZS u r.οX́d Nu3>`C!hdw2MQbZWz,*Sގ"1a &{/ ;> CmW3>S:{sJKE;z3 LqNmԈ=su-R.oBNv_{y]峴@atM;tWes祠s}vo`8 kIn_VK6R^:nfNLNv؁j}aRΒ^#:[45f;LxΒB,t> uZP#%ly'hD.|;tYR5QPV0T}6 Z ,A۪1ғڝt={rH-B]VluvM9̗g W bf֜C1XϳKPAl`}?,t1L}mdYuO[i4C*wK@F51XA!(m hC"nK~L\QطfI?I _/lE5hZW6SsY}C(!ʔbpmlt1$H[РIӓ~-l/BZo[vSqj=N-Ֆp^|)LQyd^(#GA p+ Ia[z%GlA8f-N >NbA; @nQ9Xʤד`OLk*;KZ7 {lms=ҹ 2q`.3W/3X[O0mET {+9!B(7EE"if>[NHNDÈ&J D-xe4w3_};<ʺaaRx[?ѰY+x+^6&$rDM"fA( ` (Axؑ`;fvR@wd52 M'o LW|nmbQ-52r(gBB !AD!{__y3`6_,1|2==«(ϜeL%_ND JL ]##1.[>P蓥rwm=nJ48{V< &`ɼ.I;vd[nٖlJ$%9񹟄_* 90fD9M%Q9PAhS!DJ6E:q :9x*k yayPu,M6ya9@E p$Xьsi?6`)5cELyͿoBHO;W|S*^I,u8NZB1C5r1,K asphk@psi!$d!!a!7NiVY+MXEL3j4hfT*0L$q(,]Ǩ!2O򬌠R jΫ dEA ԐFZH: U[bPq&8̈́@PkRMt@kѐj;uhru,'-ylaB< YXPB僛fb,"xT c2qL"Ik{F7FL.{6,t jTz3xyC.G{67QFp !g2ٳ^SN-`UJ]. 
"BUcёJNzEwبf>Xҙ`1?ԺkNJ@;ܫ;c>z>zfs`>d+a܆ hU4Z_͝{;'1^yd9H!P;O~Y}" ?>|ͫje\lOB _O׷_7߮ǭ#>H*pl|;38-*A9*bTb>*4N8X};Vw)=b]~^:n}N z orN;l;gdU{][ חe vAcSuyA2bك)?kキϷ5]?HdА)8UU1w7(zcv~zk2Dc;a1 ]8rID'= {ۊ`!!DW;;3zu=XOFzUr8 FEs#JpN:EDk%$1Awjw(xSV8Jd}[ٔx|Z(ngeZ+^cV))!ޫZW֤ aIGjE:Ao=Ő;{u{SS^ gӮY!`LOw^Uz&'/Qt_ GsCN^-X ~lTWd|.7+A]~^ɹ ?ɵa[OfHL9\GHsZ4r_yo!woY9c, v#m*+GѳCq-MBip١6,eJlN_S]e_pNGkQE0/gG=蟣㧱KvMk^I7^9Z; qNtyP tip tyh@xvjmtj>j?b* |Jݛ~6_;殟4F7Ƭ6'<>\'ϳS@<,cq杙G,.7E oj,=ɿk8F JzO1j1/264v1+ ^˚v(4mg1!QH IgTHN%;l^Q:awxVX9!tkǶ9!ttBXs JU;6Qg/geҬYCF 흃548FRƩ0O)@.a [pesV=N9,tk|aQP#GfNqn07 (Rݓѽoz2(X |8st=wKXO%99r5w=[OF(jG " ޲-O!75E< Ihsv1 H ~v-*XBF'}j1*$_s T_#^ ^k*=LJ4-Q6f)AY@jS8DD9˰΋[V$Z7ڠvW^Swc+6<2L#tYS(V*dQST*0L$n`~h-JS',HfTڬU+۔3f\˷)N1+BqIjc1I:Sdo9Z\rs=MZ,P\߼~[,a)YOW_E% ҸHW8bRȴP)aF7J͍꙲Q1"5!E c ,8׹e %?]ND9?g,< - Q-Bdsæ@T"Ny+P =:IИ8Y҄9h p*2*%EQ& P*8Ue V:D OMZHfX/9W~bȹ<lbL 뛯yzx,+%ubuwT>v\rcS}Nog5!tBRS$_}u≢ @j.y|j) \z0`BG *V7_;4'5>2} )F r* |Û<bUL?iS: *}/b7Zi+82D :4ɳ_p*! &ya!|fEA=%'\TҬSH-͜"Ǚ4pt!鄦'A B>1}=Mgrsg'Ngo3d I$xzDMgA?oVTWv+ejݛ6{r8|{OQx2Ad8v[;k2m `uiB-@ :UA hFƒW`-,8˷.3ٙޭ/+n݆ܻmD¸vcMQw6a I4`NmXq>}H:=8C~S94֍ݔ뺅r]MuD+]qR~D.s5wf֓ GlTВ4fhM)$!#ci:*PJ9cqϹ]i];TGǍFzFGGp`IUw64%*((+SelZZ j8-2Z$g) KMaF\ SqsޫZ\M RW{$03jFm\Σ۸LPz4 Ҝ:B\s8FlBI~3Ņ1^ș}&azK K )k2BPL˞"aJuB d8OUƘ>]'\JO(vPs{4b䫳?>}Ogs?<ۏtZwjpK ̎jӯ:}?TnLF)?Uxmv~[|y{庢MuJy$O$D0:&LSVd`󍗺cHV-\>i38Q%ܽek?3keҳg?V<)'區8R+9\P1B!I4U͓,ri"AW.8iÉW߀l'qω W3ėo90b#qbѳsb#(hmSfnS$>s\,ɗ;6>Έ^#d+ɫ>T1]WR@SZ|O>RW!+5xJ雷ۛ%>iSJciZh:FQ\‹FTT_ENkQA֨9.AR9hFq4crMU:PDx5T\.Z8KuϏǕK>i.?v:)ozc zR3ΖteCϗ#Lyvֱ֍)GZ/_$Yw \Pd9F.Q=^)Vp;ӞR'o*;m.H9g;ʎăR@AFRFׂ.q| gz*~!8f/JQ&#W_27^*_"#k7B  KZnz/Y nXɾK~LX4O81AbC3ѕ`a`xp+T- tqagD?ZhrGs 9-D>@8gyWRa>E8E:}NA 8>UbA8PAސYe5 TB KJtQЄX<׌zUd*ZeZ-i5XE-pѣOsq~x @ovNjQOf|4`o!_wS+Ӈ(W? &CN7|c䌐(\+y_*9 rUch5-^9wB`Pn?S+q_'.Nu(*cx_3hc(ƁQx+-&Hs]Ȧ á.].Ĝa4#|kGHڒ@% 3?̶]ӗ_fw|mӺp*NB=iNv_]$/ 1)\yu!dRǤsm⯛h`fnk"Sx͗J-RvEwc5uH%5\96 ),$'+G3B-U\-zǾ0ē[7:/ItgP[ 0z`"ׂ$|QB`IM Yaʠ;&FU fTbg\d5Kġ"BDb+#GHi-q! 
C6T4tV??*:44c~[BSK0Ƶ/*Uo-Afp\%G2JF YZZvL<'?bW^Q5* a*ē?O)hٓ P@+q _}_Lb9|͜Eno9Jars x( M3&ڷlp_.YJBh9ꗵx3,u.Afګ'N;$ Y轑gdxeZR'9 ;qvW zŵi=kn߸U2jA;cP KK?t 3v[1AV J:}r7jdNjjА1JwNa+<  ("a,+hI px ɜyJ9jFyF }ی[YM]4 x;T$kXS+YS%y]AvгͼZ~lwfzkD$/ .+4jxjBiUJmqgp.awvÚ~\u׃c@c:rW94i6%mRs8XnT>}@ĎF5;+,o;U;zy|ztKA-z?ϏB#b,{63~ >[UcY-R 0{sFyH 4;_3Tcaav?yb<8ڧ~~ )l_0Ežx^>D"swuf8k.cX074]QR<Ď$$_oOKx%j= 9cP!v⣻O@yzt:˕=EAVTIt`$&ɴoe, iô$ͬOtƍ;1X*4T;cXFqECdFB,TRTLx%J4sb1q!5$ cR n'-%Yw*A U3uyu|YB-!䩌?UrK Zi 0ҒPe$IdQ*0ձ*1+hCZs7U2{1ro=iݲԸނ^_>}A2$!_W?<\?5HGoap0(k _ ތɃ?-γ?{:&8.EMpz;v'pm XIſ0}kOZ0,W=!Xa5|dA30bNKP"JO5 ǣ_X5}Ek_Ip;[{bT18(2wdSwe#u4ߺޝR]i*T w Aθjdc) :C9EVF:!A IZ GUV2[ݗ%:&>ޯ[ٛg{,E9 R1,0!V:lT pQ5h'q$4.VnUtቊחMhcndT$ Fq@"l^!4% BVFR$-*G*E2iAlp u΄ 0a 9! <% 0 Q 3 Flhe#jV6EN_u&b7/Pa0PG C@3P7(RT"nQj0PSko7Gs # F Dq|q%F\ʝ9J%eZ`np$(v80QrMT}_wEOHri\Wd #c7b^Ď~#,1VlhZI@d7Mݸ '=gi'k w1h+|]ߕ]U>7yB4HzA'筷.SsuMq>s[Yh[Iu 0nܟt!^tEW>^tEWx AUWOգϣCZ$Sq>?)68KST5!Y&cQk &VY3NbD0 6KS:Cc'> Is^TJ;[fP`XffXyb\s J8}|>D(8|?)$>qCGO-Lj8f#xq bo^v& [6dlP< _ o.Nwf$a{W'y,E to i[~1|HB0CybpD(;t{Jjwkߟ > OCxSL_9}ս__ _._~q߽ivi>& Ϗ?t|&7|Ջ}ۻgs (OoݧQ26__'pvuz}/c[߇߳O@dE|ׯYo0V{ues>iΟV8oCvݻQ>i2uR:;KEA 3x,2乞yAzbq MϢnꌿwۤ ,x5_|lp;IT:OU29viο[K7aI X7ȸ^'ї˾߸QE:{ƍ~fdImz+N.uuN `ɖDFl'טc0\YI|htB/0N_?hտ'~FFS1g"`#.7f˟@T7gLŴYLS]@Ƿcyn1^VU?}vJP{1Ei6=Gn6;^CO)uE Vt1h#`Ä",c,2Ѹ0_nQ3g "]ף6bMwgj;+ޝ-# .zBR,vuRWso&c͑wmvXp=A:Z^ӿwcjsM,;}{^W9CCGÚVGfK0|f uFn5:f] I{߳{\+S$ߎ|;?ބ5̆! 
FiYw\%~ø)5Fb]1յv啮'THSU!)k4Ng!J^TmD x@E_a/ffLSTӝ kRMÍwӄ YxMy|jhbNěFP}EنQ6M2!H--d7A(ީ}&22٩fadr}ދk ҄;9Ɖ"s8O5a|cPo+Lv+3($L3sUNn\ŽZPOM򚦍mǧN >m$ly*HTo&Ik5G-{tZS4!}N;s$k7K/X)BDڨ[u<;~r5JТ3%mSuWeWR #de(+蟿FNaFU%i擋 ) 7m[i&} vhS@ )|+5ݣO[b~Ǻacx)\2u;]Hڝ7m.GG&V,ĎSB99Rэqj<Hs୼c0Y7Z`bwBE}7H#(SH]w']8;-H(A HT>Ҳ߿c erңѐV‰{{㉽v79_~o}y M-v[q/3~U(q'ZEWhn!Iij9uZh"/ۚdCktE6E5`2 QNe/g{,]]dsg9ި?O%?ٖg̖RQY^K|.8򔅊l3fܣyM!?}vgº˷ffޫ7?h݁=k]VI᱊&9vKH):n򎇌hK;jl5]9wt˫\Ui4\UI2 b0\ST6;tF˫Mۍj4TbA`,{1فF?EvE#B t~8ƙ }xn Ӯ(!ުzV<RiUk{W{ÈdA6 e1Mju򚄨Qupz]מptJ)XQznRwnV^;}w*Z=v j%EE b7ѐ䘴 uܙJD, I+Vp8RcРTc㷨fulRrD3Qr;mX/DC W%%a7ryH&"ucNwDL0łN 1B8&rJsAdHJ+ &"so4HF016HOJr\ qn{&`*>j5;aə-m,\ 8N_a͢#1@dѢ*#gJi2q{#5%N-dn6QXS-E'7ZTԮ;NIvhU?J#U@-bpxU~\ָK5>z9Y*vB0EN' ]sm{wQK;٭ud:O—_L]|5&j9+\]y5Ǯ*IOާ,]+ՆݍS DW|q]#p,'xqdˈbs*Pf\u҉┣l["":ids?݁6S;%źV OE6̣7N)l9UiRx9MkSO:yIxh4{$NdGFA'N%G NjPk4ztW؇^9-8ZqܣL-~zќ}+YJl` ZgJ.Up&D[%o+8䂐̭ڲ EA*ƥ@Av]̌\S` r QyNj3\˳ؿ}ys| y ^2mݹ.c5q\ySҒӫCjpaf헥DK>5yjr[-~my<# lh7gp}?LM_v3gſBſwnxt9*;)J?Pj <Ӝs}18DvJ|㝟:/ޡjvօҘTN69tLq@l[փfc9չ|#wnX\K1h`vS"u%43-˒M,7nIt9>0[ }*'/Knq=:Lrօ%'>^_[DG~t$) F0*dRx( ^NcrѕH0Cr^tZ}[/nY—x_ }|Zw6}t.jѣ\a {vya>AXqzꆾπ^ZɊL|9/u[ɧQMUzY+h{ػe>,]+M踣.M8>r]mxKN+x99Їbzo2((?ͿV5pf51 t33$Bl*N~+pk/Nrɓs?W#}V܋c %:F61O B"R괈VAΗRII-w51FꌯԺ kn3r\bPp hȋ10P@l~^Dk1>Ap} Zj!DŽ£!W A s%GQƆG ! 
Wr8KhiuD09MP_olRqfDjzKllA HH)quHr7Ock%j@H}޲v%"}w;aWLa#p+7ܷ$fNY770jypP]濵,"eϷɓ,@˕(Ru:i@^])C\і# + =]IUB"cf6d \9B)0nw`}=]]tV2]~\y3S~o+FF2,/`p(ʕW.̧W2'-0R#+R:CQGE*"$Ws wڟ1:i8LNM zS + 9Sp }+U&f0x`xYM sRjs",K[2%rkl067q'(iQV0z6Mcڗ̻= X#¡ b CUa,+UT2 L zwx_U >Y@鶿͌[,] Jl ~A{Ajзy8/6fXT p\~)HIGNZQ~=}aX%®ϳG R,uݺV-1<^?Q뵤:OozBOPAv>\WpF)!ԌF͇V*k a/yKFF؆'kK?MNˇ/ .,–pqK(ͺ'su@  [R80ݠVRpUWHAqO1%fXWٯRxqGGNc[R K 6/'u_$HG p[ z#%p7T#ʯwҖVi)0.e9')W>C271 \G"jwM @a']~5+wo:FjByKJ+jG{Z\x rh$i;k]O(;k8לtvǦb&Ѹ82PqQ%:sZZZ6w$/I .c 0d9⌖3 X&H/ p%b{Mg/ĔrPwޏ30OcλV=C@^}fSjW~AͷW%-3.k`f5r%-P5Q3jεP"6@ Jh\hTDp] r++t)eفüGԞgy|-p Q~GYt<(!`TBKS \q ]Bx[¥t8/MIX# h!؁сx|ݔY40RL4hFC4{MO;ß!Q!>X[xZ+&:tR!@7Թq&DH wsroKĹ0QM*@\*G([m 3΋9NrYߠp Q)N ''Dik3uC1(TTn0HĿ©M/i@Ӡϒٟ[0L!jՏ5 HXfF9% P)5k`q%ig bcU ڠdJSS,snyQ+A@ 2DmCib_ pLW|ɧ+8m,QB'[}LPo+t(`Nnx` "lݨ ݦr B!V L01$BPd=Fa'A 8v  );|8AоD= ާK6kM9c3 r5s{@눏j֦(4S-|>[vadʔu q;kB~RAUr7Ly7" [1f҉OP>ITј]{8Ja"&21v]^nQe8i9-VQې;Z(KS8nȟ,AdS&VYAqжO58Y=ϖ~*R׵ :n>I%D A4C6QP >\4,CǜMb Bpp!0)UX$Co`=[ަJ`zu:Z&0^2i0)TK qN/Rv_5 )R_;hㇻݰVc*) l6'+؆=-5G0>ǣĚSccDWٶ>wX»c5Xf8(`v1J'3U. F-/A2\e Wll=ZYʢpF%VJ*!ʢ(뵲 CZʡjUDۛ6IKa5G 瑔 ;B*]/e7FQYIR'~W*:6FCQc`?ͿV50#f5q2si*S>z05^LFr`EPqXBѺ\Z=B𽻦箜ANPE,@O.lgWSc4DR"Sen , ApKʹ`_ j=jok#B4ܑ1"{ktw>V@tbH$X3{rki<1Ͼ<5Xt 1 '?_|ޭ,JdP6`(D#X+i.ی i2DA)sdpP;+[!l܂3zخ4CSY54?i,a](/nB>HQS) 7-߸YNCߞ :U\LyFRhj2fʌCe7֡;ś :8 -!;n}&̝ sl\a R9WO"f.+>5xHdo=蘝T;椮v^]?S @U<&R *2(A]}T{hqJs͕1]fLBR6$5׷$ Rt[P^6ZUϘZd|#ݨpvR'}4 yAeB=~ +թ4Hwۭ `r 2}X.m1C˙L-8$V;fh%雄rnTK-1j\1Oo ta96ULkmX˞z_Rٍ9/ c!)F$%)R\)2 MWU]]U]];Qfjj)Gm3(]y ijwv֙#uVőS8_. 
}޸vJ"*Idg6c6יVQsEO`YӶ{ˉ_aNlm |b(_EC\okv@3#PVLv>rkDZ<$'m?TV9qt^%8*\;7'HI7?E~׏d3}a)dP3L"9ilԭO7VW]GUe2Aa&7s;4; 8Kj̙/oL߰ow[v==q_\hy.a btǾz/ ԩo>{Q_a#WjA 4_# ݺvu񍢄KҕfG٧(Xfsٞ=-`~#G+,:IfcEȟҾPY2WS֨n&㵈3*E5Wm\ RQ#?ܚ;t̻4|E:'\Ѩ4*rWfdLb~ wש?qd0^4Q]WAl~[d%폷B2&֫p; ( ?3z}BKV(Zg␢4m8/Y%kG yF%m0 `&np`ͰNLb~;l.ez,߲|mӒMwuE;*P؄gQ@UΔ ի*'=.}vøPn۲3aB {v A PһS'zx$TULv*=e,}+7<874 xaw =&YM'j\긭Lw [sI(e!$*d K[—beo ]cE-x{_7].l|p8vƏQkي5%֌d-UT̒>+t67GzXjDD˭ʴ/@d*Sm93?]6\' ^ͭ쬁xN}9lM#g&S)K >WQQhVު,Tqj`E|2ߞ4z9@Ŵs7_ 4ӓ$\@ DA Ɲ_^t2R4 ܌z~ǧ.l弄`r=⯘@2Kqj]@ ĩN *  RՆ9H z `9*wY0NBx ;_3|>UTSE11˔hj瀞 XlaGJUSA2:a`ֳ瓵˃eJSm <cNR +S`T+ߗRJ[V [WEޓ#_5׾ Z\-Ӻ߁ =ssEgڟ"[#}xzߍx3ĪWwww uuWTV# I0J:1) cB"epdiL@$Jg$P6TTTYf1AĆJNQJ/൪zkJ`rׄO&U^n(VR߲Kl3--o8b%(.4kSF_K,pJätO;Zd#4|1Myzޝz%Y_!8+B(i.\9maK¶eoBgZ0`,Z_.m%wF;Wx Mv8-9O^ kAb5Q4 R*6YĞ‚(n "xZf I7ϴF)aJ 5[(R*?cq& I VkAsZK*gux*!X4TQu˨*^3lr39tm@ \' (͹F(BNf3uC`c7RdJ|вil'9WJ'R&]I8&u$m<](ъNI -$X*FTSxhڋ?Jo3*2PdZ6/U7gt=8Fh%qG1Jӎʟ*r1I;n|*'TJES͈T“eB]6;'kJ10,`^ 왐38GGyZGQ&j/xCɏ,i|ukSGS Kڒ5>BqisAi;?G AC GE!,*7υ ]n$IG ȅ1ψ&Kn)0Ł47ӛ|Xyסъ#p'jJu*q &T4?^V tHuRB'g00cB!W}Տ7ak!7o`kٙsxwGn̈́."T4tRySdPWu]R9%8k"vFNJV#@oH-̫΂<) Sݕ0ΛUPz͙Z0Tn8bYKhWqo3#*HC0`,rmCl#KUV&,C#<9EOh38R2LA)Re6+ݗzK)R} $<[@5R^U! u/ZTnRfO&]cP&Xn,sѨs|܍cCz~agw04vBƈ?F_ӛޏscceǘS't_dQ:L$FlK[$aq)E?/i&*=u1 y{.׀R&ERB*%S„R&Jռ^+b`$ݢ,y1mJ81=\zypr"ZF(YBqN:a vkZB䩐imsj*$ELe쵮Fx׵4sҢv+A);F^>6JO4vBB\DKȔXVX2ђʄc[XbuLiP`{Ϩ2esn~ᗀ. nXq>]ƔYULZH謷H^d`E9E&'f |jt *AɎ,ۢ14텤wPx€%¥ YR'he'+;26Y&+9#6mN mi2G\j.pv]gINu]r ½swraՈ(>  f|qe9P lG>9S@5JO{G}>\FO̵7fOc|Zkƹ6~ ({Z ʴ׍3k1n+LwxQ+l ci^zO7dgT(d[R!/w1ftɗ3ߕNcva0OsnyF 4vE7_? 
|GN5)=ݜ:;vV* @1Zٙ\l4ͪ N%Zp-j5$3WsvLfyqL?7gD>$X /fJt҄\r4#4|ha18ya1<1Ou;ҭ%$FsyOɤB2S/SW(Py}WXMV0O `Jr%q%HԍD:5NswP *Æ *Bh-*;v)vL Karr,MQRQ9gQeP0&3 &7a`48s(XDSa`A*i֫36N_6 0y "eGjx2]p8][!t6/;C3~Z_F#x>(%޵6n$E/[`8l]nKA?g|-ǒg]jJ)HOL,S⯪ѬݿQ*͔)fyMEqY(jp8~gգޓ'm9fV'2iK2441|9b{;ARHA,(GnIPB-F ZO wcI]u$Z #Z`L8d V;1m4ѣ #+" yHawRD?""zAIXHFt](:gOR{ɜB)l9@@BFn X9=)8Z=PZkzQG%J I \(nj' K~̘{mmjHhDV{j0eݮ0 V[㸻(&3+(18m]*6+4pYN5_` -6 TU% _3`%ō':06y-1DiIqGtL#/1A %tVIb!O)xERdK);8%wc -U2^ĩᄀEdԌY cg"1rN\QArgtc<ԩ6jp9)n,YTIt.)+h Q+L(6Fbʍ&2?˖at]Q- jlCc]11uJKT.(S'-EcTw'tk9nh)b#zY馄I`!(nAڬG{pzoW_u+& I&vt"؍

w)}sLJE5ڝ)h%ǧ\*ʌʖt)Z}+l7"v F TXRARʝ Dk?5݋^h?qG ފ?F\2yy֫}Zб=FoeW!&Ogkj]2O'7s[c^dsqݎG֫_:(5qʊJљ bd~ׯ.q'p5_^c!hZmzh` xl.bƶcTҁxQlh#lxAMh5P! s'(N+0,x:Zi6~gH0zFqlC)l8};FZiYڨUQEV'Q`9ݛ؋짒fM9:{\? wsfLUS25`7o'N~5{kD;_X^]ߡ߆>#~0Ǜ_Q?ZS"5rYfSiF(,%"}X6મhǓ$y-nS멶ÝĉDVwty|aI3io\žs?<Kv\hq:^k1 \2yK*8RO Fb.j+U^q@74=/,s5a5 gTjIVwE }ixS4U WaWT<R FIt|u"@fq2vRkz 9OFIJ{%3f}:W?!1\Ӿ~:7p)+JEsVXSbF'Ep;#48k.O2|UQ> Km)H.@PG%I5@-I⿜$ιzאX7)݈rKj` ,+D)!*<A4 y@->IvQ]YP" S =)e6y<(H)t>1hsyO#H3n[9+ɢܰ 7#gNpJ4="A9P֭8Tp[4Qϧ64Dh6X/,$b0 i,$ƂΰRL\ܰ|*&E FF4txnQ^=Pvm YWIwJ(',sĉ3KW_翲xK&e_N͵ ڪEo )P9P@}bD0tQ\'j|"77KIq!!B'&}YKvìYf}P|_Z byq,_='q(:JhC $CPG;$nPmNZMdyښDZcLj $0C{S9!hƲ~>VH<ǫ6ʨ\!$gf&Tv{5 nтX(GR"i-QQSSc-TeԾq[%}m}z +D&Y"ܱٕDK2z)1(xOka U2@BE# 4qj(9iDưmiI2|J7B/_j׭{U*Mm.bAݡWԇ[{Fşշr}72m0wc|w cr}J+qΐk6Țn,UVEG]ZGTT6`T}j6<">,}Kի-%.t绻m:ȘNyp}|M *o~?YW{WQs8çߟkO7ǧ'u&V 32,!k n,q?8n*@)JMq3o]v=uJ Ϙyv̛0/<ܘi_YIK8؍}89hP\ NLFCpl^)zJz.HYhȀ "_lfmCE/3jޖNqVl*fE]uZi|x@m{9فF7g?fÉǟ&/6~?fU2jf;[~K͋Y{>_C62֔Y 6)*6Ov^&_V. @hѳp;MJ j^kђ08vmL N*i ~y9/dqMYz'cՅp5.>{s>\\խju u6"!ChG*Q!5jź͓6@:-=[ֶc-cSDWZŊl祽@xn<̻'y _5ﭏW%Ӣغr";[˿\#ҹpUXS+Y7$SV+|EWH\b 7 {-U><[%C:/-Q"plBz%%d+bF]ꐰ1*BhF ;U 9ݟ[!^1b꺸kExAq6_9og ukP/l2\9# R >ol#b{؏78Cd^ڶ1|벧dVGOm.nJ6ߺ5λ0+i%54YXv =]zr'e=v"-&rSBF>,7n#+>]@_Տp9t/ȎwgF2AMspe4ꮮ*V#9M mT蔼o,8f،`OoHbVz&B޸ #{AXi%9rMk-I ..'_j@X9%{@azGLx&&gEqR<{..iMj]U84'DtefXnH&Y%(yg21åv~RQTH}q}1;-WF7iUZ# '} qo0Io>l%Jz%[Tp6IѠqQ q0Jsb^eYQ\$1J?CKKPR;eA`Fd#e<ד)jGn!7>ѳ@dэ|Rb*sF{1yb(s=>B`*><K\&W-eAr8v{IЋ҆@dX 틥XX7N ]pň闠fpv7qw%@^1η+?w (oլrfƵJewzf趫&MY0  <1`*9̀ n= RrI8>; xrˋx=;*_?!8Ŝ Dx?A%SC1"EʢSk K3\ښz"h0A襍YqF HSi Ѱ miGKkV,eA f=X+4FE,jUheqm˨PT 29N*k}R捡1a|Pq'Q~}8t= ÙwyᇱcOe|`~] `Th5?mq =/[6g]Ƕ/>N5wt5[r[{/$ },cy7fTd0p$Vк7.}uoH0 S.Pf0Xr4zz MSmÐ&uPZMJkV_7[sde\:0 ZG ?fCD@ b4 u`nXkռVji4uD W$"ִ68aY:]HkiH!0 LzƧ !-3# \ # SfDǜ%<|)4U^D PVi*kB9)ZY( [^"B}lU>CH{2- 4/x4g70"XP፲\2;28#4P.|Bhrh/ *R c!3118yb'DD z"MQ0v94*3u8ǓEV.?Xz0_x=Cb_>e۩wayIxXMoJe߲o??m\ HR95)#LVVosϯOOꋻr0?)~J'|7K{?aY%B?=\_Imb3Q)1xիopK3)@ktg; 8*5PљSF1bN*aGr?CǷB"LPWlqBOǔtkC$_ڐr?\/z*њA`%5Z 
R;>C'ǯ]Db"VIRi`:h2y˱mC)NZ*gϏbc:DƧ'mi @1 FKyM ?Xjׄ(Z%Wa)<qg^c|g޾HM194@@rFpmіtxueXl:E 1Vb,v%q\ fbGTOY+Mi f]NȄxIv;+'2Lia0p͐WfȺa~q7!CH #&[\=_' H,){ګlR+-2EZCXT ,STxܺœwu7$T  v=p"neEA>Q-Cz'5S#!IÊܓS(vRGsݞ@#)*`'9f7T NNB\58O TFW[@FkW8{U^ R۲5wY5 xt]T[bԩ;w 'N|fÙ(Q;(Q"8ڐ&9@JNw 4}D\kEW˒$^7nf?$}{?ˀ,G-_16XHe_c!yz&^l2<V'>z1_G>ݹ]_+rhDh{9Y̞TJny5TeiŧhQ2=núq[)9S]S`Yr:L Oѽ產i虦4/Mi-4ހZœ,ΛTCϸ/Kc∨TP4[hVVl)9ch -Ƌ˛+4B~G3*@/vw׏9(k$ؽmkǫT] nxVײVCb԰9dau ^XKu*hļ1p (%Vo)w^isn@uFu/2ӌWֿ.ߞl\4ߝ=:[<ǰx&I7$G +I}S1>VҦYPLސ!0&kXL[ DUV9SŨju݆A%qS i-|LacM$ZՖڧz:ܙ@} 4Z1B YƝ2畿-&YDiL'01mFMjjԀ!M."𽢺UNPٝV&M]t1?D@g-QJ`xh`ơaд>aT#_~^,ƹ[]s!e\yDzk4 \Xfa*JXHIoXFClg w#&aZ؈3F#FnS&+W/ UejŹ 1zQңcP)8#p9\UhkU)-`DOL%qN*-iRyΗկ;E_4&GK-K'aba"k=~=3G>}})sQ5.z_TH|qq[tUHoHStgKüIhwådo\RM ~D\w~5ko^UDdbc<(H np}c6H9[RGx|=#U פdyA&EݐyJ1ȡ)Gù]I#2G F}̐-SjuC`J1ș:bСNx֭As[3/>Eߓ}d'I3b$*.&TI,N5bdqJ7CAvC& ܞd=_ʂŁec߬S|_M|}kq#7EMNkxgр&5% dmefI3SliF-5V_°wF-vbYh.gɹpWٻE:z;v6|~tЃB TZ fѐEwR7 J6)P8,iD0yҌͨG(e@F[QHCJ tW@oߊp)JYQ1dX}Q)$z9>/ͺR>6'i3CPPC;ѽ3|}<[][t>LR(qŠ(  li+m aw|ڍ{@ ͙b9-'y#%Mr? 
y+p deN&y]L%[R 2g_9pRwL1TϴvZĴ˓ij3_ILOgg8ږ [JѪELDM"쉈: P6^ʠUclD_o߶hMa-"15P| )"(U_R@؋hz~(xzxv5ą0 K$Q (Gy}E~{5!3M5}ѫ2k3?,[?͌˫FUt,uu X(Y>TL*8 FӻOGͰ|pNj璊/.j.= : jt|vu^U3췸HuJI$`e$xLjʄ J,)g:.R 5$Qr@9XO3ȼ&ΡeM u@R\ ,Sh(pK4B *WR5pvịQX{3],s98Ψ!ꦗYUv0=͸_֥hYH A:).i#fEN JUЩ6du~Q7oЉɰwH:?8M(ZgpT3FիV6 ?!p`i1tPx8PͷLZ"†R Q` Ѽ$g|S8g_3)Q;~23i1/J,#&KQ <ѩCՉ,UFфR KKٲCMp=%uE5ĊMք*/x,m] f݃j͞MlZzZ!F bkJS%rD[R2ـ-htV Nʟr1c U0ddqḒ^}g]u {ϑh j4 M k7W[Oږ}'}wM_riXdCx{3Ms'i߇oߍTPtTxeϯS5ݹ@PTPҲs+yD6V>JQHɘ0ʪ[=sV/- J5SZO $i>:C BN0O %1B!$5R W\XO|`zPT6dRQ?.}*5G?Y _E P\D-=#QEOu^})w'l6xswѷ)Ǣ0nnls6AA&ĈBiC%t@LULq:/%$ o˓FMxiK|V ~+(acEF/7ݩ袟|ϔ>vZBXG1 HaEŨ*nћ2Cc(b7t.eѭGS Ӌ~Z0~2޿]C;{iKhahEQbvb*%&ZR%׷6x%+,2q.5QZKUʩx': lyfPr]jqŽbU<s ="h-9̨h5KAWZ:pVU(v\*lGT69Jӄ 293 5"M,Vt*Ipq2eCz)!>fG ihrj&6fU9.c'9\^ALwOo۵}*=j킙\0:N&-G{kݽ6D/n\W+\u;?HW]6/Cr MeJWMuPSSa'kN[ɽU0IS5O$mN_ZK !=X3HsEi!aeuZ(Cpås)@)b i2͙VVvi%;^511cd1hueI`AMAIEuvidLO122ntj3 ڦ.>󭁖JrLED+7فrYy*-:9;/ƔES4\9}/-U XԪW4hP{dIZ-bKTGK0)Z/[;y֯aNl;HVXdIKQ+JQD% Li#s9HPZ1TlZhI{:J\>K86LШS_3ً^tu'~:In_]^6Ytmޯל4YT+T)㡱=xВ t6Vam" !uxm*HV 1(w>7TP,LQz ۏiJǏxMF6T(?XI&f7ggdr}Bb7zQ6}>6ʾCnjM(E,=~'J53+;eL{bre6cQUh)vO77VXY{x؁V{bm$苝O?Vot oY*k*2/79r5ZBӞ!hn ` 0ǃ(¬Ԃ*Bt2P׃JPIT2K|HBXg!OPgeaZgڲpGJ v`E`ð cT|\ګ>pyv@Ґv|au=\l-@wy`SDv.Z&s,eJr.DxehXDp%Cgզ Lc. UPz+Jz\;6+qIBRCWW õMf0և,p-kѕ@v]=a}ߥt6ؐZ#n22p)5]0oyp_9wS H!34Dc uSREkkNj(:U^{i[9qD>}:I(uT!.CPi$;ޱeΪyKC 2Qϳ}]2 ᓤҢgz# @Mݬ2XۉbϫK%5X ϫ>@IBzZvqc|g%?x8,iYYaY\Paŭ*@8i}v PN阧zR Am1ӂj*681T!#mL\<HdAtHYkTTlU:ە{%U=ltI5=0{vzgX &K2gz8_!A("Y8{6OM ITHʹ!T4b{f8e)x>VѱTKjJhfuFhyߌԡ)MIDRE6TI*J5*L<"p\9)M+yzax?²iԧ,دX1~Z:Sp)V.:P-7lְ =9n_p&#\0Hegnt}v=Ń9 @}.dH#FG aʨQ c*i#!/OMDfRms])ܬ_׾ˉPurrա8? 
EXpz+:VZYu!武YK92@GBțY65i.c@p*"2lZ6V ~j"R^fK0pfK JUn/b !(z&؞Mf`XG_Jd}{eϷ韔rY% ~>pql|"hbOS4eLЗ) ~LӱIrI<KP.*/QDjR*pkV,;bcϻ%诋lL7s6t1z#,wT/),[bh!j,) XY|&6 9*$k) eE(=S8*LC`'Lu4("$A %PJ| -/0Hc1̘2MafF#Xy^rE0Y 3 "󣙀!mD`8hRI.sq4@6`g9i 6  N۾GNEnkܕUJt%Ys iՕ*$K`q+i} J x;Bb;Nff+XOlU爃v JEcK$B[sEĥ+ɛwhRΤd&}zhaHԐ?5Ǻ XHk$ ?]9{vO P[&v9ީM݁D '8+U4zŤe+čyKآ;={Ti2 RVLY޹Me"T&3 O48c'."3 ® ;4k Ti2It+ʔTbH;$5' TM8x;`U2]M׈ʽ@|P' 6(y%A~yf,\K|]ъ' Qw:taVj4Ejf`3{3>(]-lV4m9pcr;}ssX G^ݽܬ~/Gc0WwdJ(<40/RaMa}SHP#(F]S伓{b<=*m3*++3z 7ӽz1h w[XzJxӛԧơޛq·fS!$WcB*גw;ug8$ 8\ J'>K@4l4I5<&.]Q$[l#`m.Fci:6E;Ž eb3d>& X@Q4YMU/Mfg6ktv1.8t1*qAOz:z_"'L2eUOHR=[>E4#Ya~јbN.˱yΤqsqCT?+0S=ӽY/F[B(>GX!qLgKCF3Hn8c%]pdD|$Q}0aQHtndTKRۢR7d^9l[_8BqE 5=^a!.,2D#.vy5"7k}à BU^pcE F7޽(JGA1yWcf=߿|eO3J`ԲugvΜӫSp*?:)$GB"!9)$^18,s9bs<+!FBdŽ+Zj.7NUwŽ+Hl  & Mҵ+*4B&#-ᝓTNZ͐_+]Qz9Js@߼iPS w0;cL~2=%Gj*eI ӏ\/ Fw6 -J$i_&X3k: ~U\UW׫\̩ D W~.7`1ޕ*!-P09fjLQumӅ+%ZaRFƪ#u"*E ntHQ;RWj(UI?cv.*] svN*!* rq1Y c-!j0.'{50=d@ f*g6kX3㼽 k+qu|Ax5iUpkWt"ޥEp1hjӑAWgR,Wpՙ/~J0~?f"a5f:_e&:ꤼy ¦\{2k/kf{r{ΦW+_BP`i# aG0DK6>؏gFN˻}DL-V/]O}V1ismacgū.IoV\ih[*3r A[OJ$FVsRD¼#b^F.:k!tge)ր pXM y\|[Me)#R['eL4#6/swQ,[en$N %WH3Tjr.#]\MAgP|мJ!Ws}mJ5֌ D9f*fYX")a\O7v6Fh ]8\.~,{ $)re@[ˀ!be{\7W!e滈<[$&Ǜtbr+4Lvz#2w7bH%w>6 Dr .vyxbs'9wޕs8#NCcކdzhlq:/&~Z|'Q2GSNUGeHtcçLd{'^23A;ݣo=HDiMҕKS?V߸~ M2FjwLcAF|Z%#+[,-kqݕ_ew?cY97+'XJ(FiyR/4̗ M"#U=JI  "rO~d<=r+_+YR\ !&wQnY7*eThJ1Q6X㹝m][Z643WeݠU:VAȺͷnͶn-hgE:֍AJ1Q6X#HRͺjeZ643WуuJI*+{e^7o{aQWb#Rnv?C_biЮp%gL #aWׁ"g%rT'ѷ)XBeĉ[lAE1iC5t㏘׷&~ 햴(U4UF#",dqEju^5Ueptft9;7ΕиvT5h""Xu,IJuG |8 %c^f^ } Y"Y2fVȱNÞv<U*r[G%=譆,H4\99^&fY$hcDSֆQ4wMΪ含dN5 ؆8A9ˢYd.Wyd>9`ɗ|0Næ4W(Ԁ4BWbՕXZZi7{xp`CjqOeZ `դ%3ݗ7AVyAU1ڮ>VGs] =E՞YKz=6Q|ti-[ۀXJE:.K[ME %z yB7RJ6J +9OI1FN.OtFK5|Mžϯ&.0_L,vd ?ieҨˤQI.~8*21k0a k9 {91Ɣ S-@]ަA˛?qã҆.F[t Gwb~mT[~9m~"v-I0* b:*/hN!=D&7+l6h+pljNB*@Kb*k.W|iD`XMgO|(Wi{k^hf&պWe%zn| 2;/ι`"N0yu / fQ;1N4 RDk6iU<(>HcAsa8]DR"0JSHBZwmIp;/zfzlf%J);&)Eΐ=wq GawSUU2M5$N !F (_&3B aR9RI* jFόD U - gWWݕ˽'߯XB]hӓa$(qKmFu./ʸehLxy 
ZN~6uGJ,5cc*դ1l[#zwpKȕnPStUGBn[̊%9hM/#7g\cKLoӎ3sKviVؘlĦ-YiKpQ[ԇ=d%ӆXȭsk;uܒXz}lz>e8&k: PJO~'Ժzx~8 TL)hK-J%).jJO/RJK$ۦhzKFލu)ѓF4zq?y֏sȻԅ7Iǥ7~,U;eIΙT|9V`9+UbZdCysX@{r:Ubvj~!r{,c[ Ht&씔j>xV]Yk9ګ NѺ2x5ː$lf6xTN޸Z- u^WmxqUe}6vD-u.\l_ Iz9Rz +u2 IÔ>66@V"q~B^/V=OxN(oZ5/GkpȈ peOyjSZ73}Lp+t0kg_ąjQKEP֦lX4Pċpp?ٱ ~YF$K ob,FayTG[pjVͲ}5sJ՘Ԏ&7"> WF z~U]zc4UJ4QV9 ׃h=XH/G;p[rRʫE]vs_ &p>Юryz>#<_Xo?qI2 FOG޽}[) "z ~i]!&ҏL~bˍp|x?~f4J ķ\ SN5v16ϳlx O~ 9)QZE! <0{(MB SQ\~}1fɗcn4nFeڢrHf^NS6

##AxcȩN~.{(9}-w )@0U\ʣ) 'k~Mc y=;Ȧ аNk>ߝy;tliދ^dG_eb17F.uJL)Q%UW;%UD L,ʈٔG::JTTa_ЋL3Yh%P -ņs=@f T飧Fu!Nh3RHRYh-}uFX;?EzZIZÈ(hB;2@n:t7I?DB0jn5Y%tR4g¦ZU:_  @q,C1,B pcV;wK8[pMQ|KPkE|ϿF85{{ hc8Z zL# XAH]2S"Ey 1oP"R \L9ȸW4N$k65q_|w*E$?-W]gYMGGKjhp 1D7cgt*/}s.JMQmʄȤ<RX MSWb̑aA9!su4.RT !Iq Alٻ0d^0ser$F2$`(t*4,e"n#ؑݸgJwU戢6<5' '1QWc‚.q;V7A'٧SѺ:;?75_FYtV!N+CLj61zd\Iҗ (B ҉صnpޟ;/Nu +܋fJF tEt .H9 ^#/X2V^%SLpC@JPې t)/&x.@j<F 3%ewK$Z.v$d;>#Z:}δ߂+ŪoN+7/nG\g)gqe1]CrP6AO']K&pÆߨwPW_3Z_ybbK;_crֹ7άAt 7Di¢Q%o=|0J,ytXuJMW k5d3'M&U[XF#+4ux=ueD͈#|\/u& \{Yq-?'q[qOq3Ggn6jH,t':C-պw; ؙ+A dOW;_N}&+9~&uO׳>3bi3`ղ7Õp:0\ W~6HHNcD2b,q4)ʐJk钷kb0REJK&pqk'A<|'֖fkRuדXĮfr5dz~-LfiOcƹ.UopCj]/'LKݣxٻ6rdWx $;9`2O0$;с#<(RKVl8QSWEUta.WUt%H[RK'bQtx*^jKigDay“SnGY)%Wj*_S3_c9c'E)Pcd @򝪷]nRC"F4kB=F7 +d'kAMk@ AIfM'&h& H1Nz\j+SJoexj}]񺓟ί\Io8V74{ mV2G(ϙ+7-q{:r:l2xF0xyYrIo^8 ^$9*z`2l!UST~{ʔIdeM׉_c8cdpGTJZs !xղˇ} zDPV_I :lZJ'oYZxbJi KVhHiy>ueeTBeW_h*W$HI,>.0FȖ^"X$<04BP/BԈ7J+B(R3AeBI!.z)I) 6D nsDU$ JM.BIblƥe|=qr$\%C} 'F^\.r] JA ƺU4I4IgR |3Spz*-K}a4T[ Q`7VXSR _+T|7v2|+g]R* %aUZ *n֕gu(≛GRD$jpD Hp8sR1++R3AX{G@ޯR)]UT?x%dqjػǕ+܅壟_vrB'_ kN|F.NG^~Φ߹Yl#hR7*o\2IF+G/xLCjा&RUDN6UIMzmҫMZu1˄CPm=+=)<* -wд{j{Iv(/|893ى1['̻IE⧄BCFSh9y nRD- @ —{_C 0̕, :<6nPޛ02͔ؾ ~%ߧ77>\xCiwkc@4'οJXu` (/P/n(>FR*))Z+ * En@-A95gHc)ED=(0p&H)FV1%"i%YB,u$ _Vސbg/i-`F88xrɒ hqH4 C_ B 8Hֹ5'‘VАT{9ӜdhFhd1H ,:ӃbQC}433ˮيJptl w5 \\>C^C?zSDLs [22܂i%UW`AZJ!"j[v!7;;/=@@)gKwj5`|t Axw>bǒGk<Vh5bl׋чO꣚!6Z}ǖ5{X4juu V/7(t֏<FO eW>n~V{jvl v*WZſryN-+WZ?TBtjI>Z8-vӁW%nmHȟ\DdJ;SV/|LgwbeOxZ SF3֤:͜2P[8=)@"0,ރ2%m!Lէ]/ pЯ\azȒQřc,j&pqbΠy"T(6EAC+˵#dDV˞JK֕T(ρi&|)ɋ~J$Pƕ D@?"Rtr/(]I7pAEMX_=_ x:x{r;9n8IlݫijxejeՏ?2f@LV9)-*5Xg( (eJFTzBJP tV.jQ[BC"C&FC1:X};Dv"Nc^pwJ PʣaX~{df5!q'Iߘ5/GC&k1!3!KZGQW@;_A)CmyA+J#-x!@k&QKrE<(9iΩM*{ Gvz EЭn j8WR+2An*?^vMȞl3G+#KAV%;%օ y&a [>r9iK%1?bЦ_~jG w; k_+8s@JeL,q*ҧ+,AK*y0*f=a1uyEc"R!N yԦKzsMn|8jp|w<֊QNŐ^e&Ff8("+skL b@$Rr;|rq.Fآ1j*e[<Ԕ/Uؓ @mWwY2/4ȚV g2K$icȋw)˵0lxu7z!݉MoݼrȘ*՝oa!/ Z6b \"S@cS*&A:^cH.j.Ēij.Ē8 Ҋ 5鳰"k)98+2NQu۪HMvZ><}>tJ%[Z[Հ[)dV ]7P;uTfok{?Ź 2Pb`[[G\{x;:> 
$[5fOK~O$\WyJ8?J8?J*%kX RPZ=ߗu ֭fKp!JwPQ u omQP{NGo`:ýޕt-;f;g1Z@ g5g(i,H.RͽYʼAAW94;?:@5.Q6l֜SrdgsT&B<(mQ1 ,假B,N'ФZ*gi}4/|+f٣hq,NDת:@q˛PEQ[Bjܑp,`I SN=GUqf `v ) Rrc71:+f Jw#ʤ=|/*,%Eiks|N8_7Ō㜝}-/$|)oK`*WTp_b Z~U`a4 J.PBj@_)|kClͱ\I?,K!C.;FˈwSs"f?Y;Z\sUHAZylnxVD_ywd2p$PEQjPE2'k*BhQm8*` jQ~J޳HS OUJCj-yE9ۡJ1{0&ƗRU_Kݔm^d% [/8*yw& ̗)+7zJ΋ !5WLZnaGЀ'(GpϊU?O5Ub|i$?>&FiNQ.DEvX+apXvk%P{9z5J-+|Np$oA'UOK"YhA0rZc!iBQX+@PtOQ;`(OWsjP5Cjh<3-'ITTeO#pӆ\""Xbc:йbC?n? _&jyQoL]ڙFЏޡC?z]!^sB6H*19s's!x.sjreyF9Y'ߞw%WϫxPP]ޕc%t6Yr|tֿLgf&?yd<ꎱ;-^/OSQ:&5uP  ;dP16τ-xߪx햯$b!m|2/ U8Bs1C:MP-R &ATp[IA}TLCnaӘ+v4x~*nlMl+|bUJxbUx_F[RٖMfT-ֽ:zJN_ݺwQWBp2)o)ITBprBeTxTBpJ9!$DwI[{]{5ֵO5 .9]Ʌg\Qg Zx_N8lk98Rp- J!QeEalPi>%D܇g\ Ȣ#<)\0̎^%Td9'N0WdV:[Ҝ3j0J0˒̃b\6{~8^Z|LLlnvVd^>!E¶dH\N(:8L[}6]#Pur?{p:CrA~vלxqu`< ץ< =/!7_=}\0)o&!;ϻ{k}a?7<O YHp}~U'ͭL^,+c@wDh_jh,<.qqSKk$tAT+XTN*R{~ ~@UvGٔ`K3(2q}W2`dû/?ӯKr&9fvY-V/Ўho6R~Ε}p#'ݠPA"JjtZ[oO^PO@{O ঁ*IM= I}Jҳ- :%[v >MNLN}F{IPiNiw!_x=OiA!VBu$S)hP*gRۍءĥjY~JҍnhČH5ԕFBM"wWIExdcCK!$ aF~֗cO3-e 8I.R#,8yї}FolC9/8*ڈހ& : Y>9(<\Ad{H;<샮3RAꑮj#L_!( c=YF^sUuB=Dki/z Ɨ:U2LY$]_ +:-kF<s"ONBm7veybLtsMwOw9C;so{&=vFC:!M` 8Oxefӫ^ _̨iW$Y/.g% cxϋڼ}=Nܕp8H_UUen"\QbAwa^Jc/ޮ2ND"x IΟit@fw>/L"Vm̺"&!+UZgF;<Y25O{¤JAHu=&7Pca|{žgh/.?(\g) 4J0Šr3^Sv Ɗc'CҵB\r!a.0ix2j쫕Bg XXLr;\M2{\(z$cQrnd4J.rvR)s *Гl4AXj9J [=L_8DC.ҧ÷ȬN >u;L3Ħ[rCc[= G)QOu0HlI x˷k*TA3:TQ(QS]VPAYH<ː+B;Rs% X5ګYT͕|ŭ\uuEj*Mur3[ﻻŨLF%r3G.(7Ӟ"*7SAX *^j-yGGlXgڰB ڰhSi5oXkK -mYi-huݜIL'9htPLURR1Rsd4EI_Y-h+ԥ9H$oSм;A^I:(y eT[:(@5Ԓ i LWဂMI K]tqf5HΑ%^gJB)Dw(U>)3#JRDH3##+ln?Yb1Qg ֽaOdgoADi`3٢)Qİ2aJ&NP%+X9Jh,%Z98!{j;G)_% vreu Rwt%Z[&f SZY@Zc,. z &|"Wh<\. 
Ηq%*MS߬}Ǖ>THf_9HZ)(Ns+x6sHτ}t7ԇ9D{f5?-3CBg.b壍H[wyB]v^0T6P^6z6}7Jrѕm66'z0W:W!ixq9l_`wͧS{|>^LّRSE2yf2[_rQHF!]TKR2uA0r+-+ڎ7OX\U;S Dž">;6YT^b,K~RސcWW ^v~&SW[ߡCo}]!Rۢp4T8 aa3b`#Rh5g9>$qFg~yWڣ+U4kٽ]žrp,W&]&?A &p93fˢ[ yv/$VDbH1f0hLEVTq3͉ȠkcJ#˱TQb6  08<7*"c q!&J)ETS#);RXCPY]+P:@Jl܉w;$si*jΛj,"PL-0X d7;'x +F#qdrFМbSdOck}FXтse32[ {ΑD*"3A7VBsY ړiSbB=dh1@a&l" +'wSsF%UzO(!%5-J%wڙZJ@kʃNF4nͱ}9s[6]o4Jgrs_L >~8>N./14JI/wKτ~&ֿxkg}z73*+ǀZMeӞarw+S!>t1VMPu6}fB >"":Aa%XW5@ZЭŬҢEy`D۱u3c JX`YAŚ8RZīOsFAQ{/7{ޕd sjUVV)+cE&>ϡͿN]2)1F/V}&9> YdٰØQ[qo1WDZGB#氋J^Y_ocpgtrNIe!<&ZҸjl sԐ,L2R2aYd)F8#A<㹕 _wkLeZ%o~Q>D?PHDMS-҅Qr0qk% &q0P̱Ra. ӉᠺzN^͘1dqvwAuql͘5ՌQIlN‘K#J]ȆJ]TQ*H *uѦ-uQA[0fSj][o#7+Ƽ6d$݇M`:V"_֒'.ߗlRKjIfweƖ[Wb,~Z2D]RI+Ń{V~Yiغ7y)7Lǂt GTu1ΊطY*-% C0% ,@{52&4!K9%i( hSVq )ƹGԳ{h9S :j^DLVz_\]cIRuquR2" ًHUUHT>uIK doeA/taLO` fVm4ܪX׵!BDCRsImv*x6 {mt^ (-fcwRU qQ1Brbmn1[%0}ЧpTz) /GZq$d!p_} 809P\r-Ғ2 h!&YMoE%/ƻ:(`)!2TWy /:Lum:dR ^+DKOp$X pK^Բ%6a<=c7fRXlD4{T C%L/;R1e['atvbRNϟgv13}lzT/w1IeĜ ށ"6,3*_<]eFҳ' m rޱJ_zrԄ(X2) _ o%fO(NN3<9<L s0Zh5>zM#pm!s߫iq_-O:v[bsfuŕ 1`,D0}KLd#6nG Z`c2X!c΀BZOX Ir%7HaJjӹ eUm|>d[p-n3B#T?2C!N Rph>90$* `Ԧ#y3) c$ ]4snjF$$D&uZ%Z(% e,s; LC"c0 k j wR&m ӫ2&43,rQpRSU8XA("R4ز<2; BWŒ5jZ0ikNȅ" `@5N@a00g~@@0{0x(LBWʎ}[#90#*٧ejQUa˅4Z= P*lF0o-tP !#\ ;EUs4ާOL;lXPlHTY,T&DЀ 2ⴓX+yk&Jj%֔ =Jb(?f 1C@'(t&(V G81 ^A՝!+$u6*~%}mC8lʂ@"ͱ&AtDE|ʶ gS Z=jSk11(0bf`8@N<2݅Z|gASHdZi$pSSu^6R ;̖(%P0, f@Z5Gm&3܆RxJ4DD !D9HV*8% !&0gC嵧 %NXgW3\JCoUhpY8Ĉ\4`HHgK.p{)Zr=*a[A9%'ύ HO8 bWkiWuaCs@ 2wIkRt DǬ%mkISL-#\{<<u\Un={ S]ϋ'YM=z/g^_~!wNMB%f~[~qPκų/cTV(kxLߖ{/T5TdTj)MAE,S1>[o^ MZ?oqGuդM]l^5cYP,*' d6{+oWc'77Cy=E`=ܓJWa6ׄB 0 mp\9BAj BJqeh~Bc$R}[=#xzt0=lg2mEAo9Sv#h-H9)L=EZ7+LiTkn-Ȯ@SF4k?f)=}88{|VkʕZoO@[ye[&젣DZސ^̗SSzv tnj3'9,f)~j6 t[^9bL=[mee뤇U^7}K^7ӖL*ثkk {[]8zp-֋qH+T:(7iOv.ӞF i'x*W˿@E8 Nf5r&9$Z*5@@ O0c@@0pB58k$4L ,Wel\W6屽ԒAZD{'>7|hYtFB.*g~c|v^k;" XIu{UXZbhErys8).[0"@/Jwr'}2q AzA0ȉ@ lQrqDN+8su޳j]1;ZG ve:",ĸT93'PںNEEnA!-г/L!ZH썣RsRi+E7BAqHBa\0 ^jD8e#ٳQj*,)a%TLlKxyɿ>?,oċ4A۫$2G,(!F7y)y~`6KQl֭k|8.m,$">Yu2N;`3M[Ivo"(w翊 s[kO~Z/}Ů|Y'7rqM# pͲmh׻ 
b^f]Y1Y yaَkӉVKXgpgck6ԟ*^S9 Qs \({w$J9Z2@xk;S÷;1ز'LRrjjPWN1!z:e, zm1Wu֛:D!u b4`{:DpZl"ԑ:6$0[.$INYJ$"*$1}* P;4/$襐@{u e.۾W] nX|~ʨxi $T (0;$rC=ޭsuFHBSxD # kXL4!ihWֆo` UtGyu2Zj,<`g%[ N0J8xRx[>AȚ\K+zP& Lh$z;g3'Nୀ|8({D1g̴9#$bblN2"[X)@81g 1?“a=$ Pjc`fOA#b &:f4LPe]/g-75q10c*)030XX 'Jr(Ej )DsCIrt9=d4fdɀ;VY+!E|-&E;x|D| +u, .? S$dg?%&O]̸߲[8~cq1 ;!ԯŃZd l֖?-r?|wU*=0oU>U6E"\~'mh?2_G#XN lwՂl%w'?]>] |מiP>eEuK(oE1kASg/+l6nR ćc- (/R;RƐCNi+fECX< lAȄ $Lfא$k:" 9& #`qcVa1 % beDzF8kߕ '^ rd8`GBs]8ag3] 3h"Y.+r{t;.W9"qbͱT|=Ma/m[dq {Z =av"6qmoFs.1O=c77;ui.Peu̩3#Yr51cWRsTu٬X\˛47Ӝ˛dOm4Z)oRQ\:8I{dHlXIkmrC顉(zNNezBDtN#$P*Y) 8IpF ޅQթT% uuMҿHW._2)Ӡi;eFk~rcBUS!رG6Ged?2Zf^Pנew!-m*pI*jPCuK[- a'i?/q$ u(C$ jɥ ݐ2LI$8P $ŠӢǸ8X8sf3a==cݛ5)$ _){ QJH|#(y&.os7s+:3"8~?ٹ?uͫ=J"ր{Tz,^FGer9z49QWWFbLBz4< Ӛ/E\0%laI>$ 6 /L쮵ksA5Q8h˫z̘|gi&9&w_cq46Rqu}(>OWA2])c}ȨT1?Y>gttj1/֠[Xa^e&H-NAU~P>y+ ;Ÿo_d\֔L0Gd]9fSg_r}^pa¨(Cy5ɼUW#3sĀWO5^qpQʻcp0nyxqV󩭤nOF {{d%D4" 195 dzNVt2U\("i@AcX޻6`7l7"FB%6Gl١V-eÅ6/_j8:0Ƹ]:~Z\vE5SD袯bc-0/zʕ 8pH8sK0AӊD38%}H?2W > d99ron?czFx_A-tb6h:עygEΊͻh/Y ƢHZsHI!q $1R1GH$F< f &cI蟖)77^<%|eFc HE3XcVU)$$QS QBJYKB@PdD$ÀĊӘ$rP!W: pK-j,R/!ޝ_?kq!n"ǘc` k`#* $ #DH z,x(y1KK Y=+5o*z, ֺAG1s6VBPEIHprˌK66Ͷk!_c}h;vҮت6c4?HQcD2$!c4 6: nȲcboZOk ;Hb_^ضY+ en;YI Uc@`zכvo^ѣ@WD )45[^Z4HG`#Lv=yl/D+lLH2zS`|h7|#u< #\:uծY%Pj@soq9juG+4wyh9[@Nzk4@-Q?frvhWAbtgf;csa3d-^t>Wrv۩JN 52F_sJ۹9Ht(VO @pn믴3)\('CIZ/[ګqUe]栓9.U{6:e/ߋO,e,t|ŇNK o *n8& <<,)5&y Ѡz\;N ?_B)m([g-,u͚-!"X3-6elg$AnS7'o|)a ?c[=ƾ{s_z͗.;F6)>]k7 .ڭ E$SQ#3=]N2`SZyfkxQ([Rpq=D֥䒿.1)8?KX>>  QJm,./ \fE~@ş=wM;U u[(Rn 2fm|z'bR>,WP`P -\zD| k6H j5"]pMS!I맬ἢS4lf$)Nj M|hӭou|&/[*ƴk1fKcW8j\#e,HAHQh!ADySAHU3WHvyyu.uc4\ЧFv'zA@nm˂Neu?{WƮ_{٭pi% ,TwZCA/RIhRoVDO=3Ǭ& +a#~̆I8$V;;L:ObO?Fj/1!e,oZGrrrr T.8V_RJ3B9&q~>hTGı[-FS6?ݳN| 1QVZo~:kTzACQzu,iz%^+?aSב.WdNFB$?6U*a*f<:JPźxPim /w,HFOsMJX0H^lCJ۟"|:9gb:c)Oʟ.u~`:K;qN ,tGJ~HMi\;fZiO,Mx`pǪ$Q?p{, w7I}ȴ+_\I2჉WSAF4)@8h V,d_$dki6p,Qf-JHLak;C)ToˈlIGUL<1\\}&!xE!:|&'~2} ҈j`궜y#A2usZzE۵9f5I- 
vK洴14`-9-zw級ګ㙙~D5(+n%vLjVk,l܃ڬ'1ΌU,oZtZ՟6~I߱*u,ok]OoPigG~h{RU3giة8C#LN+@|ct|w橓FՍ/?qɣDAV-}huLْjOpW^зΑF=t/ˏjt]@S_;E h z\J0Azwpuw;S3L^J\2M>6ngm7<ڰu4}K>FšRRX0l$}xbRO p(/FcA0{&0K&'X$!mJ5` )@4L Ը?${&oH*QvKBo|Ȍ0)OT+ZL3G1ཀྵnã:¦~ _#TI; LQ*(KX`Hi=fwTrcG6{ɦJ|IIife4Ѫ3$LgBP1u)}{6d~+{RvZu'zkzvT齭kt? '}}#0Vi<={Pmxyֵ!ϵy,v]| CyC V$x_/7<\}tNDHF=yMv`vKl9rD[g%'eG(9b:80:5`J_ixR(^>~ablo:73hYrDlhd*s ": ؝9g|g;dz+d.yxT@;C!gQ&浢Ӗ|䡏XUސ2ڑ{>[utk J,%t6Pv=cʽ=ScCzQLN\:ԚT JVOXr|u ,(z;Z+fre*Go-x/{QebɑIZ,P^򌥚 A4 u:GB 8홨*c:Nw; D!XXkss5z}+Lj{}>pk Cix سdZ݃{մ<̺.>JwxAKHHG|{]}躻ʽ%o˽G]d)ߜAc‘G᥵/,J=#*wZW%׺kTڤ(?f~s3Y5ޟէ+>]kթޝZj;ZT,~nub4NM[KvWPdYhi~G|BZOK5@&XO򾺹Xe1κʬ~\ׇÌ?D%i\QG٪9W~yêzuylKY]ů/?RG@(QQ ؘ $[w8\nOVu}Zt1M>.3O0|="LRC'\\ 4}X!<:.)Q|(=HNaW~lRu<T"s+?[eAUӊ ,'dB= [B3-賁6ZmYML_xqDW%[8 s]îR) ߻w~parX.-H5KPHYIXktX=g|b3RW2bvKy}_ZV]S ""c %ϟqgg4Ti<\]A\ľ c+gkٽ S2;*'ǪieާOѶ ԁghN[CB0,"Y%9NZ -N;XqQxiU2r@" jJPuLZv7lJS} xUw-)2A?O)]' db1n0ț+ZϟߴsyGᩞz/u^Io؛@ "-hyK46[z#huoo{ mB2|[DinydjRJO{ER+'Y|z;.ą3@ "]dӂ$"$ZZ{JvtDx.] maYp?RXC| Ϭhۙ$5 ")McqBڻa׫gLyWk\`A׎25=N@8ܷӑR|**(1 Q3lgJU1t\/ @n}unX^~XŖV Kgo>y&Z.҃y,SiSI8 |W7&:_~HZ05@* \]e8)/XȔԼ>>P.a jFx>,eZsh4*0),A E>x9 Lrq&<(5!Qʨ,lu D޽[RAkk#6MXq@SܷL "L{#22SBN}aȯiA~砷upHs=!ΊD 3f:5 57lp]͚ZvhX$'d yeՑIٺ2s1ʜȂNY` ktS.V^X_omY3r^sͤ>Bnck| Y@\ ZR2DIg_KJI\2e" &C q@ :=K2}d.Fɠa'_@3yj2-%q.4f k^h,7O[k@Myɽ J^\3GB#b@ϩn}_b܊+Tp PZ)% `.m-\;d( G{[!·gFEP_3洀Up6d@"zPXoǯW`12?Sik+u>D?O/ I$P_xʋ \wrBПG[n. ڛ @E!ɩ! "@qHsCpa5tQ}hk+U^8teP:6/"ah +jx`>h=}{E)|aleyD$ Θioif ÞTHg̡hAapC r17,I fv\wڪ򵓁Ka(|ti{$ly4GDFHQq|M{v14҉B jӐU N5lxK? n;H;$/>ϧJH~kZL9װf]u8 4h_/(qh} .G&)+B +>YbwU4̙)]Ogl;&`Y4RV wߺ~n/AkP|h.$:i`\E++Jvze't᯿/WF[I6rGy 쭱 (ĵHp8yf]QBa)?\ƶYޞ5%s>DbiP p !fѫhx~і iظQyP/)q6GϙPs3*Ki^D# n0H|>ep֚h""jY<ٗdoV,)؜D}>'."2G/!3FBD@Yݞ<5_uVĀ/cl7` #6$*&2F)vf-+) 5\k\s_jovN*F+C쐀f yT>w@OE.8(*G<"tJ#@ rp|m5Ks)Wp/  :2d9Xu`Y CMFxɉT{O߳&ҕ O#@|2>(w]Ww(l>%{?| ]n_QR$wd8M94%X829K’E" Q|]_(&j|(~N5;prrԛO3tX٠aɂ/PV&zf?LuEA-'x7K1? 
&3_;SW6(P'W̩dؑzj/#¸ V]se]Fd>ybeP_w `ٗ:?q͎ @^\"7T<:8-<ރ+ a57/oN~iW*,Y&zЧO.]N;3q6L_~\kʲ+:dp`G޻a9wG?,b1ӝa ^N/( g|R֠$r)_01Sj~WŻʹI~`pO*=z3[j#m1[øz:TGTw9!?E"zysgR0¹;T*gK8YswR]yP'DoT'|i߷"˶'xƚY/y!Y5{;qvJ7x@S~^^v|,Wt^Hn)3 оXךG~pPWz|.uPr^]W3Φsiyk%|RS 'ıO2JMu^qp t9TS|xې^~_uE,+ |l-G8'G?L#\N@N0[mj _4,H{sɩ2KF I$lbXH aIFq! y>N E\k,}Γ^J1 D8gR~{)–`%3аaI_>GAʾ.k ~] ME<#]ֵ 5,C='C'WH*,D %QaH$JNB1Kx)Lȉ} E=# γRww/z٨6T6^X, GQn/9\rq.)==Γs?b?9z9=>؁ 1q?.;,Fl=mئ߹дE./a<켼2cP&/\eZdE1m "\@ V5"A*#:#>|-'8|CV&dJ/GyK{a7J? jM0_C` NZf'jv]N&^%Vs:xȄzYpvw5|1*9 <ɢ.jN%U:?Sˏ;z%0d{dcֵ M+v'XY!|sya['$4泤 pdB,!-E`{מ$kp#Mkb*R'4+? ;DXns0Dc4D L#t;!}"4f!|Q ;g}zg֊4|d>`HFj଒H`Rȼ_}yS0Yk Jr>BaXN0R|Ua1FI^zyRa;\ZPZضwZcto5n]?j0h.Q@[$o ާh.^.*Z`JiJ"4偖J4vkCf|C$9lo$$=Yt?{7(#xs&#%^/8[>iK\lEنԒ.97ɿ7pG#4 l-"4V^mɹy8J_@m) qY~@x}Il%{c}w%q8kM'#Ӽ04-s~'_':[b%k'ooFuO巻H@;.ّa|o3k#ͳLHŒioumy*&q^}F<}x*nW\ϬY]-TZvYrFX2&]Y4U,joNߞŲe# EL= wm 5[[4E}= Ԉ'n˸j.$211oqnn-iHtZ>c'pyi/5nH7.daQ}E$ؓC2,Y~Eg1MfB[]˩ ꅵC7X=zY`dٷ7|GTGl+"V:b[PR+b+ܨyl8ugUSآi%\&8$|Y\!"a@A"G1@eȀݍj}g7/`b3an2:y }>(VVSw$GzaiL_ *ݔD˘EJ*REu,=H""22"L9`$5bsaMKΠw K7728Rls2Gsmr2Ćciգ*QDE;̈~[({=0ZJ"CWfi"`cWIVAg,8`e5BTn_cNlrg<6!rc}ܭ[Tq[1YcE.W˧ϓQǣa*A23Mlp(&`Uo=0K̆qqN &pJB.33Aet.,s zc4!?~LSȱ5Xo $jS1>rKl1:VPt) OLA1CADbPIlsc^5!WSm0M!nN2ĞbdDRAq=ϗzsO}XKzBΤ1}ɢtІe(B,cvn#^(!T4)r,ll""6!ҒS-Fc!Og$lenV\'P)z@ H^y3 Q^vI]8c~ xQ^WINŁ'.XeKюw<qAD<@i_eSW6x#N)) '/Bg CRinhյd X0H(B^Nt kRQX|4**z?⏸`pz p>Cor UQTQTQTQY5pM"[eܖ f%*J#b"*f0+P ~6X{J75}`?!{7*..ӱoYsKpH1:[kwJ& SZkn sP&˵2[+CM eݡ7QlވC$?4Dd\A#Z s0JB uRƅjH(g$F2\]EV綠\ {mZ ,YR6 %cwIAz< 2%] m9$ mCC^6){?R?nB>XNwԱnGp1oEKnmh WF:EU= cԋ?M9HQcݎLK;׺nO$Z64䅫:%/ZN6X'TPYM6*^Ll66_}*,$ v}+*d1 u`l^Ȭ]92&GI9Bk1g&#$Q# P,w 2W[2s%˅}T,M2c4Ǚ.AH5.9&CH{rJiۜB,ӹ1xnMxW|EuWqcyk"A5 wu~OVbZ&PtV \+(CTa2b熊 KGME]B)X?kHֵLk%iuD5oyTb2AfŨI%I/0Z{؅{]ymI*/̅P,cB*d 4D2t}sc*<d k䣾 `ř%2d{94e!bAi09kG8y x H3 s1J3*"-$1MPb H % 0L[ᦴkDeB{̘ R w. 
sqσlw^}yCGdXMEL)`bIiܫܔc1]NZ v'R l5&w}*+CvD@'b]R)0(vj.8 yPcϨqHwY@΄yL[rw'o#Z84gk/Y",!#dG(P!DI&cU).zMǻ>-Eb#](AHKz9sbcyby=^|+4U8eJ1Tf%q!`$18\TD|S_mb-E)HQ]D2Y~I3:=.wDw8S~Xj5M 5,`xn8x()Z? asDoJsbFS{)#@ d@؉9B8d6֢# oZxӡsճR#} ƕ͑ԘR$IL >o%y΃eyvvRYlՆ? 'Kc,/,.vւ (BARSIF"V떍&fݷQ\t;G-V/W59Wr/ᦢ[}OO:CJ#8s;'D̋g2 в ADdg5?"77'0$u~LEpq$1c݃?2R`"'/nwj=<:b"# + ϻYU?O*,!H < )UC 7$1^!iK](ݼ(Zi!34'6XRb&gU뾸SlB1qՆ,Bk{÷(j:ջ(6ĢiQy}lNI'C1@\r9s]UO96jMmfv{IDQ~ŤҬyi߱RD| fujEK&i;?L[MV].z8x?Y^W@|?24%\@ZRt$6,NU_}p)V71ޱҮDmܾRn{mql=%wzp7s!}o hF$1 Gu:%Gny9wX:#DxXz@_hwf6==Bu)%'j">Л3[ڃ[ . Zxap4ta})͍t~$7) y?NX~?rtj%qnowxpэ%vr ׷a-.4}ǪY| #&*YKZ6b`]5 %x~%ML Ҟ*E_",J\!/\EѽX7`S5 Euu;%6wmݚ>Knmh W&:K}uZ7Q[SRT;X#= RnҮ[C ֭ y*NI o|dUvNDuz3 &jr:M/A\6)mlOׯ^%Ky[Y#C(*eܘƭb^я♱"¼#ʸKr\7`OzW vW;m9 c_Zf.'b1=;=Dϔt:I!nBi[2VBG_ovg;o@BUޥ6[*o+){L4r#vg0P)L `^׍9̗ʁdHEЗ ~~3{XYl6@R,Y:#9_7g$̐& $UN&ƂIXJ%DL =RGL)Kqu%Ò<m8#p818a>bqSw\VkUq_{ u<#Q6UyA^EWkÙYfw_PLF_޻nțo$$]ٲ!A9&{jHr{&g[؟f< ׅGGGCN]>r?@,]Eg1"MρfmzJ>]˅* uLP)HnҀF Fք )`! DV9n G;s(HwO IZe. w'_../CU˳9$>(w V\r BQ̣#9nOˡ)A,zko$wy(C$>u50޹X# WygYYTR`J^lШf Wn|!ttH)I*@O&Gp G Lو 9 '^Rg`j'Wr#(086JX$2Fi'tel VVP ;K c tj /4hG=$DS&-O@j0̯Mv8tIh{D.8|[Jvϓ%ɻ+rBUt^qÅ|xyir:{tq % (NJ윫/K#9M~zKgWF9g HA[qΉx~O%`ò!,P+ }H(r :# Z9seR4>u3b|*olFSАo\Est!q6y˺y N EurǺsH٬[~ZuCCq)S اVŸN$}mVd֯\IձoM]O!+In:8i#\Wrʙ꘴Av2}:狻XI7q`2Ӑve;Ԍ5C(]UWOorv'ӏ^bqqi7&a!Q` PIiHr,s`YHШ0$|E[?O[: 8BJ,ŨP-<0}@+2es/=3i(7`nv8^0;APlݎp`uSn/C /w}Ie M88SAv.U2V{XF袊'7=J Q99D'2M|lT/ҫ xkK*j\e+\=uM?l"q98C`3.%8((7ͱHi/5= pXk:w I Pcǵ:xTx@A:$eDzYI1s*Sj R5 kXqӉ*֦fDDv^[\3Kj^w)8fM7d'dsM4Ȑr"eu00c& Hg+㨒 aa%Aĕfw.0H()%4Jh1bvʊTP@!mARJ#@b9i(T?hK2Q@Y 2QfۨG jeJ¯m iB2M($OrFyA lV9b @՞6N٥ќ,~z( oy[ >qK}`?<)^DŽ 7S\֓yqZD]ݼ;%"vz?'?7*"ozi&Hk듫wŃsTӓK{87d) ɷ&Bӓ땨qS`nG^l~D~I0//vS"CC3kJ+&/!Vp#=C_1i]<@j_K䀕Pspn(B2=0Ę%P6u߼%n`{I{앹UH(G&Lįb2]Ö=eӲv|9b&RgC9%h2ϸwp b,zUǕjq%⮎+OJ?0| [Q˫gGQDC wby(n0LkUOQ™e}T89dRڦȏ;۷)')ͯa\[Ԓ }Քא&TKI6YK úON/AC; f []Aޖ|ؤCTǮO2Hi>]>L&5w}>!߸f:zӺ1Bu˕A>u۟1YBD6|*S Ӄ8 S`re:cExLgjv Ѻ!߸nJ)U pӱ3-\Δ;vTo1Ic7$;Nݱ)`OձSE= JJorcu&U||mvf9R?l*Ԩ!G)damHdTj/H>Prh7( 
XtIԇVF!D/(nPU}pmY_i#_PRT;xKpQexۛLi<<uv>M&!C)|%%<ިHBD`Gu|+ax.eǷ! ,#¢tT Z*2]/iNÔ+(Q+ZL-y-T DJbn 50fdZެj(f!_`PNNkoŵU4ENa+x]E$*!,! k" *&цX95+ 30kS!r"MPX;gkS4?[\4NzgoR:,`>`C$*F-ĸɑn!,9'q|0f'p?/no|kC(UQ~.Ga{'z߾|9KFU.fƊRQ`(ER!Uth-:i) ^t)gY  J.i@!+1 0Q*Π2?(V9oYϏM}}{!T 7sdUWmFuP?=ț5.u 󮆐46Uy=Χ$48+ K$a1BFr)+ώjEGQ}DH/_:!0=>KNhKm!ܐFHwߚ.j ooBw1T$֛R7;Cb4cI~B(p,'lP/G4RZh8KGs"҄<W*t^.i'jqChWk LZpV:ZkV~ 6\~E]i1?i7S#xk;u^^0%XåY|gb%Zvg+ZnP:ֱL'xQ_oӴwW_XqyQWw{YX uAIZD=*.G+ or@}tydqw'<Ͷ󲇞FH MIP8֝<[?\˫g7dqmXYDOuOL(`V"C gN UB 6C~ySh:g^@(G2.6wN[>&= BYO=y bXɷdJ^)0tڣUفB8"'}8"}'H)؟7Ф ? ~4,hYkZ1Va&TUhm[*࡯juj5Q$}RK&d>|C?x~IRVb6rGn\Yʮs/~֮:9ק H!ؐTw?-C!b-v !;Zݒ4b2ĶVRYCQ*$W$;ӞE3eDOm]VqUt*&K?SwkpQ'tߎ1Vk:u՝hJ7T#Pҁ>ʔ}v<"P9qʔnuUٻTsׇ_2eXzg <ў{ʔQdd~tm^2ӯn.cʔ5 fLyU;q0uw wstݜ^j>661ぢ޹~y2_'Hwo|Y[(cO&&ip;va|J.\2'L2;ǝȂJ<' ۡ=NL g|<1{'+?_φ(wRZ*jUsD"n Δ4K)%smmmN.|HRǨkRQaXdʄT#Y()_4)%=_j{u򉷪0س{S[>qI>\:9ROAڠRKMI 'n?,\~N=h9ceNuUw>ۗO#P)yIR5T㉎l5ETQʾ$A126qkԍ]{LˍFw&ӯ>1S&ČB.a\&pHr(3>z9Z(Cf&Qh22j8g? ߉oQ6V^ se/==]ji&jD %+~'׶߬!]̿1AcT A@"xaXu*eHvsWͭ]_13\^oZyhپs;eu+\ycvԀ~wr-mkZ eoϏQ/0U'ҾvZ[KRc-EkBKǣεuj;!ݪ{yyN]mͿ.պə_·so9"xj+\2sW >eȶRsVC˒.y_"uBĪ(WLI9̯Uuy޴Ԡp:?_/D'B{DC"ɠ \dB8/R;EźKH,tH%:WYMXUE!Gؾʇ3S¼Z7 Z\/TV&J!dH>M?Td+0 %:Q7}^U*l5\/9۪m l9^>{4Pf g! 2,I)ᮋ5d )Z>Z{lڹT3xz0nj*I&c YY/9V3()sӋ?/Vr7_^.RmOC) Ȩ,1pv*,^ڣ5nj1-ȳr)Ronߌ8h5U.:Fw_BSB#mo#N:qQl(L0z̮jlHq8>穲q9|5g.?ow&|{+J`we$r}vF#L+.]֞J5QI0̀*༑ʁ Ã&Cddgl۫My\uK1.^đ<F$[YDoޫXśMM^/|ŇEVU,!*hY-:۩RQ̮au wD%<ًeJN'Y?&Ͷa LSRm! 
V*C5Dta?6tvSi0NUٶmtc9RP fDrvNfY,&bZMs..^lVBz d-PE:^Νў31t?AB2eh[^"$Sz Rq 37%SqCCGsAߔuH9$О~DFy DEWVDQmTQ:Tm=ãFSq+z5yF?u!kՈ Psf Cv纐WA?^=(ދ_VeAȼ+W5 21 /=1@9uOWC$28wFw"xC@g?Vz(kvt4`rf "\AA1X%0CJl|bǃR73#cEq6VMiO^0E+oIC!1V{qag= nŅ j"6%)js7c(0>h5X]n$kc'/M_WWۗ~qkZ\쭂C- *<DyiHI6w5nk_FlJe"r W"XkU*G!=mz7V'}A?`#hc>Q*#IU,(iųRy %EmMÞ+KAݙ7ay۱17-o`(H)اRX<*\qxI{->},᱀G ?0ϣRw"Y>{^n-[ڭ۲pY6#6/ 9}hn"wkQz-DKoza0M`mVAA*J K Z#MV9^J1h>(?Ұ3'ޣ&vb`] ۓl%F_?/aѷ7QԄ*Al (%%]%CE˱zǴ@Y+2t"EUĠy8mGK }.ϫ"`v wmmIzlS}j~la0@vLy *QE!)Y`VhC$D^uN4eCZMR> H"ǀǾKV Ko>'^ɸQ`\Izy*7"3eHs =8=/b7NG.r19ICZ C9#Gn(ٞݻmL4>7;=c6>)$4 -XĈ%I-hNJcuQ'N8rm*fUqnre}_ke8N O& t$ۓKTW@Zo/d80LV"$H5w m`ƱQ K.~j&PšA#|bk.憐zs8qPHDf/T-&ơDbzQC } 5?LI10{v5?ox ̘H>hOz)SD6+WMes?ԤEG?)AO_ӆRW1"%B7~{qn-B2w@2HIX(}a7JܚO_7EDŽHYtd_ճ7`{c=pw?ewQƗZRP(A>Fs1 y (PP )!JTu{tZm׫zXp.c)}f9Bv׵bH&%^-ʤhum{bk/[?[$pI!5њ(ތG xt_+X& ::`i2" &Sp4'b[+>*5?6rR'q2Ӆ{'B猪n}`Giښx6ۤv_:LJb0e~r1딤wuU{Y\xvz}|iT7)5ҥ0]ז$/RRNKFu0s!`)YsujR5*eʪ pQbJ7c: PzZf{b)*l4gDbHRzRד2p#$c] Yc~MXo]w:]Zn}ϳw}pZrbrWBvR!hMO`Y2\]\ ŢvuUfƩ:+YjOYK9'E@TGj \_MI uoĈNNP.:KAFY`3=SKp:cpH;olT6oAK-IZj5ju{d.53pk;vywUJǝu>~kmH^J(n 9G'vr`so72d' ly 8Xc^Q|.P F.Ehb+]BW?TKp>*U{R1:ޙ[(}>]o޿4%+2tBL )kȠdFt6lOmU =$_ .RdFФ 5u^C`HfQF]裇<2Z6E :s;D.Pyi(8XŊI8/8*+k mNv}pZK@ݤ>؍"ٺ]?5-f^tT❤].g\z60_qH: K}†UՒX:್p  m<5(@7XOҙm0f€NJYldx?9#L"SŽ ZTj n\wÅR(^ӧCceA2-2N3'SXwe#APg:̍ic9֢vk2A ! 
R #zVϳ6_Oٻh$eE;vǧ7{pK'bEscꆵwe`V_Th"ORڜ?dG9g#*Ec#)\:r~mՇ׊ 7?s^ym?WhQ,MOg^{vwSK>PӃ0jhۃH2x1I[[;`:+JR]cSd\V_z/_]WpK,+#YtrjV?s8_xA>W ˇ7w1E;_8_sRuyo0;<˯P-s&jpz, ɴ$9e7PBҚҺߵj?ogvYHԾ:=YIe9H XƄ֚1곙(|WA@.z3vIRa5dYO{EWjKy:p`<jD(/H( Ћniz]?|xсnsYϴ O≒4{$)aퟞw,v +\"J0dW}uBMbLVmw|ATjn)qS3:q^ lJ삧-C _ o)O MfQ,%B$t+P®yÿ=|Սv}ˇL}w-L/%H/?$.='|IޓQ&H: y)8J)JEւ @%!\|1KOfa .}\f(TK5_:8#Џޚri|goCq?66qm56J*p%D^:ٔH!qT!,gNF\H8*#/4p VB!d2[ jc]$'ն.7]R@ҧε0}wPLsudLR$AMgr :AVK2WTEikg1-2D E<2"!Txd熄ND qxDX {NKoH E; ,ŋv ;+h&%"ic({gt&'7R?O"B;YldXP:1$R.)5Nq='!_$K?_/1Zsw6N )ޢ\U8'MVҙ2Y:ϰk4Vޢ?/T'>^VޜDz*Z@?N82*©}܎EX_o[nk?#8<޵ڻ Z4 ZkPݵU_3ZԎ梑.޾k@8ߚĩ~v^w:_LKhɍM6R5Isa(k*J%W/b$SsASPC1i>e</{9Uo3([f?Z~PGNvmFF叜\K_ZA7'_el-&Uw-]`[tK/ v9ar :Ni5 B+6XhJOVHb,SKLNҸ:ȹaIpϴ! ϼZg@BWUxB\NYl V]t+Qr' XlN ~wF!Fm\g"]۳ z`ucm1"Ec$0{DŽD$ѿ;Im= hųv=k)2 P{-ƍ#î;q>`1-{ f&i%Q1kw=?vdMz3RRHV=>X >^]/C5"7eBJMEY|qRم{y̸ZɊ,1=5j)ɻwk@?myQB4чUKts SfmB Et.x׭^.gDFϚ`΀hs"|"R¬i-ö˟yU;VO.!nmogtmJ=ߝI5X.ru|vŧ/W{F|Eu?;we&SɁՇg٫Gb4Jպ2yZl 0\Qg,٨jq*6ߍełG&E2ACÈNge0@Un(}(hzۉPa߬cRNx>JdưR+w[:q(\^N|loBю4]j;RԛPYg[.ʛb{.Wn3os-.t:]jO6wWw>^*_Nyx |Jr?beo7v=g{~D U]9o}R1L9q3&H_캦*' y&dSL~nnN3btї) ҉i[M4ɦCF8wr-I}Fwrvhޘ#лa!D;۔;xgzAxso=w׋~k{ŸEfSγwGWQA%g7O!l<'>VДZU9`f R\YX) e2VXvAA$GL *i޹6Ti9EI9R\bIVV*^/Vƹ?A_TLqӎږ\;eДU#\/UN2` uQr zj^٥:VȇXO.1_.7I" aW%P3JvӔR2#*%e-jL$N'PJCе7'[]+|Wfгq]> #(;t 0JjX-= _y7 ? diiylr*Yb"#x~p9F UKz!`<4ex^%g˴~va2"v~Ύc9֩ҨW/*4Nt T???q!o7n~๼>=; 8rvUȧ w(WD[>|{qyN5-2CJ-.L|U!Qzvp)H-?]]{χ >Nlލ.Q > h/ο~ =m7J^Fq8 p)5G R_vQ ݊HEt|O0 Q ~B2Q^ Nxp!lLs6:0Ep3 PZ?ҶVv7R`V\͉.F`[zS^G-Vy &-wZ#iKSdl<&cX]=0J(rc81ӟ1cA`wQAb:î`M9'җ'"!'VPlwlCHEGd,H&u`wм# RK SoG4e0 *QCw߇@AH3bhp^`{h' y&bS`Fw|-I}Fw.:US[M4Ŧx\]&,ZS12gDyh^rJsOm.Vwџz}}\NܰԻ*ֿQŹX+6ڕ?;)olyw'[w*U4djkVL6û$7. 
Jan 26 09:06:10 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 09:06:10 crc restorecon[4647]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 09:06:10 crc restorecon[4647]:
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 09:06:10 crc restorecon[4647]:
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc 
restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:10 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 
crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc 
restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 09:06:11 crc restorecon[4647]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc 
restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc 
restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc 
restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 09:06:11 crc restorecon[4647]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 09:06:11 crc kubenswrapper[4827]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.457958 4827 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460742 4827 feature_gate.go:330] unrecognized feature gate: Example Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460755 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460759 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460764 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460768 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460772 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460776 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460781 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460785 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460790 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460794 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460799 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460802 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460807 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460812 4827 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460816 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460820 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460830 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460833 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460837 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460840 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460844 4827 feature_gate.go:330] 
unrecognized feature gate: MachineConfigNodes Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460847 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460850 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460854 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460858 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460861 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460864 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460868 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460872 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460875 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460879 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460882 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460886 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460889 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460893 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 09:06:11 crc 
kubenswrapper[4827]: W0126 09:06:11.460896 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460900 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460904 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460907 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460911 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460914 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460917 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460921 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460924 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460928 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460931 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460935 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460938 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460941 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460946 4827 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460950 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460954 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460958 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460962 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460966 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460971 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460974 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460978 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460981 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460985 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460988 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460991 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460995 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.460998 4827 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461002 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461006 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461009 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461012 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461016 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461020 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461089 4827 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461097 4827 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461104 4827 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461109 4827 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461115 4827 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461119 4827 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461124 4827 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461130 4827 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461134 4827 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 09:06:11 crc 
kubenswrapper[4827]: I0126 09:06:11.461138 4827 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461142 4827 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461147 4827 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461151 4827 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461155 4827 flags.go:64] FLAG: --cgroup-root="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461159 4827 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461163 4827 flags.go:64] FLAG: --client-ca-file="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461168 4827 flags.go:64] FLAG: --cloud-config="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461172 4827 flags.go:64] FLAG: --cloud-provider="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461176 4827 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461181 4827 flags.go:64] FLAG: --cluster-domain="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461185 4827 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461190 4827 flags.go:64] FLAG: --config-dir="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461193 4827 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461198 4827 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461203 4827 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461207 4827 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 
09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461211 4827 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461216 4827 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461220 4827 flags.go:64] FLAG: --contention-profiling="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461224 4827 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461228 4827 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461232 4827 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461257 4827 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461268 4827 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461272 4827 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461276 4827 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461280 4827 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461284 4827 flags.go:64] FLAG: --enable-server="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461288 4827 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461294 4827 flags.go:64] FLAG: --event-burst="100" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461298 4827 flags.go:64] FLAG: --event-qps="50" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461302 4827 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461306 4827 flags.go:64] FLAG: 
--event-storage-event-limit="default=0" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461311 4827 flags.go:64] FLAG: --eviction-hard="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461316 4827 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461320 4827 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461324 4827 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461328 4827 flags.go:64] FLAG: --eviction-soft="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461332 4827 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461336 4827 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461340 4827 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461344 4827 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461348 4827 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461352 4827 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461356 4827 flags.go:64] FLAG: --feature-gates="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461360 4827 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461364 4827 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461368 4827 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461373 4827 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461377 4827 
flags.go:64] FLAG: --healthz-port="10248" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461381 4827 flags.go:64] FLAG: --help="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461385 4827 flags.go:64] FLAG: --hostname-override="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461388 4827 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461392 4827 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461396 4827 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461400 4827 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461404 4827 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461408 4827 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461413 4827 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461418 4827 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461422 4827 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461426 4827 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461430 4827 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461434 4827 flags.go:64] FLAG: --kube-reserved="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461438 4827 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461442 4827 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461446 4827 
flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461450 4827 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461454 4827 flags.go:64] FLAG: --lock-file=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461458 4827 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461462 4827 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461466 4827 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461472 4827 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461476 4827 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461480 4827 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461484 4827 flags.go:64] FLAG: --logging-format="text"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461487 4827 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461492 4827 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461496 4827 flags.go:64] FLAG: --manifest-url=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461500 4827 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461505 4827 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461509 4827 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461514 4827 flags.go:64] FLAG: --max-pods="110"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461518 4827 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461522 4827 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461526 4827 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461530 4827 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461534 4827 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461538 4827 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461542 4827 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461551 4827 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461555 4827 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461559 4827 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461563 4827 flags.go:64] FLAG: --pod-cidr=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461568 4827 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461574 4827 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461578 4827 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461583 4827 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461587 4827 flags.go:64] FLAG: --port="10250"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461591 4827 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461595 4827 flags.go:64] FLAG: --provider-id=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461599 4827 flags.go:64] FLAG: --qos-reserved=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461602 4827 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461606 4827 flags.go:64] FLAG: --register-node="true"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461611 4827 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461615 4827 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461621 4827 flags.go:64] FLAG: --registry-burst="10"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461625 4827 flags.go:64] FLAG: --registry-qps="5"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461629 4827 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461645 4827 flags.go:64] FLAG: --reserved-memory=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461651 4827 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461655 4827 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461659 4827 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461664 4827 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461667 4827 flags.go:64] FLAG: --runonce="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461672 4827 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461676 4827 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461680 4827 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461683 4827 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461687 4827 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461692 4827 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461696 4827 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461700 4827 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461704 4827 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461708 4827 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461712 4827 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461717 4827 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461721 4827 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461725 4827 flags.go:64] FLAG: --system-cgroups=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461729 4827 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461736 4827 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461740 4827 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461744 4827 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461748 4827 flags.go:64] FLAG: --tls-min-version=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461752 4827 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461756 4827 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461760 4827 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461764 4827 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461768 4827 flags.go:64] FLAG: --v="2"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461773 4827 flags.go:64] FLAG: --version="false"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461779 4827 flags.go:64] FLAG: --vmodule=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461784 4827 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.461788 4827 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461885 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461889 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461893 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461901 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461905 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461909 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461914 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461918 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461922 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461926 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461930 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461934 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461937 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461941 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461944 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461948 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461951 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461955 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461958 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461962 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461965 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461969 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461972 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461977 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461981 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461986 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461990 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461994 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.461998 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462001 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462005 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462009 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462013 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462017 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462022 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462027 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462031 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462034 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462038 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462042 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462045 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462049 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462052 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462056 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462059 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462063 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462066 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462069 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462073 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462076 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462080 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462083 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462087 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462090 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462094 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462097 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462100 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462104 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462107 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462111 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462114 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462118 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462121 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462125 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462128 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462131 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462139 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462147 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462151 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462154 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.462157 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.462168 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.469989 4827 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.470025 4827 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470087 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470094 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470098 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470103 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470107 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470111 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470114 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470118 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470121 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470125 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470130 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470135 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470138 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470143 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470146 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470150 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470154 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470157 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470161 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470166 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470171 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470175 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470179 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470184 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470188 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470191 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470195 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470199 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470202 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470206 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470209 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470214 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470217 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470221 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470225 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470229 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470232 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470236 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470239 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470243 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470246 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470250 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470253 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470256 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470260 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470264 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470267 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470270 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470274 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470277 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470281 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470284 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470288 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470291 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470295 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470298 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470301 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470305 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470308 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470313 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470317 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470321 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470324 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470328 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470332 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470335 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470339 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470342 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470346 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470349 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470353 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.470360 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470493 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470501 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470505 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470509 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470513 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470516 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470520 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470523 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470527 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470537 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470541 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470544 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470549 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470554 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470557 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470561 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470564 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470568 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470571 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470575 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470578 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470581 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470585 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470590 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470594 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470597 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470601 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470604 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470607 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470611 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470628 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470632 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470651 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470655 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470661 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470666 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470670 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470674 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470678 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470682 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470687 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470691 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470695 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470699 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470703 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470707 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470711 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470716 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470721 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470725 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470729 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470734 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470737 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470741 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470745 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470750 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470757 4827
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470764 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470769 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470775 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470780 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470785 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470789 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470795 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470799 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470804 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470809 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470813 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470818 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.470822 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 09:06:11 crc kubenswrapper[4827]: 
W0126 09:06:11.470829 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.470837 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.477668 4827 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.480376 4827 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.480468 4827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.481158 4827 server.go:997] "Starting client certificate rotation"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.481176 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.481338 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 16:32:26.919275008 +0000 UTC
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.481397 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.486409 4827 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.489861 4827 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.489954 4827 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.496936 4827 log.go:25] "Validated CRI v1 runtime API"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.516458 4827 log.go:25] "Validated CRI v1 image API"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.518771 4827 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.521307 4827 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-09-00-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.521357 4827 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.540981 4827 manager.go:217] Machine: {Timestamp:2026-01-26 09:06:11.539299106 +0000 UTC m=+0.187971005 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0c72dade-aced-4c2f-bbff-04b65bb274fb BootID:7d8bb801-e455-4976-8dea-8e9cfca6b87a Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fd:72:cf Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fd:72:cf Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:38:e4:e4 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7a:b7:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6e:22:52 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fc:d6:4e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:be:7e:2c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:6d:2a:33:d9:94 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:82:55:e5:de:2f:51 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.541344 4827 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.541607 4827 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.542189 4827 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.542468 4827 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.542524 4827 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.542863 4827 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.542882 4827 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.543176 4827 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.543233 4827 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.543626 4827 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.544092 4827 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.545041 4827 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.545078 4827 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.545125 4827 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.545159 4827 kubelet.go:324] "Adding apiserver pod source"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.545181 4827 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.547534 4827 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.547858 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.547928 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.548044 4827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.548149 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.548224 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.549149 4827 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550019 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550070 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550091 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550108 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550136 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550153 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550171 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550200 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550221 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550239 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550263 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550281 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.550606 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.551543 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.552933 4827 server.go:1280] "Started kubelet"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.553601 4827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.554163 4827 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.554253 4827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 09:06:11 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.557099 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e3ca25775d130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 09:06:11.552891184 +0000 UTC m=+0.201563023,LastTimestamp:2026-01-26 09:06:11.552891184 +0000 UTC m=+0.201563023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.558781 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.559107 4827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.559131 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:53:26.981392434 +0000 UTC
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.559187 4827 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.559194 4827 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.559710 4827 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.560733 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.561007 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.561099 4827 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.561625 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.563102 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms"
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569886 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569930 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569941 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569949 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569959 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569967 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569976 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569985 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.569994 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570003 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570011 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570021 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570030 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570041 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570067 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570077 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570086 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570094 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570102 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570111 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570119 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570127 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570135 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570144 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570154 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570163 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570173 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570182 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570203 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570213 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570221 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570230 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570238 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570247 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570256 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570265 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570279 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570289 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570297 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def"
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570306 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570315 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570323 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570332 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570342 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570357 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570366 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570397 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570407 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570416 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570425 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570433 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570441 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570485 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570497 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570508 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570518 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570528 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570537 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570547 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570555 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570564 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570573 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570582 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" 
seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570590 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570601 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570610 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570619 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570628 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570653 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: 
I0126 09:06:11.570662 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570671 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570681 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570690 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570699 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570708 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570716 4827 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570724 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570733 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570741 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570749 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570757 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570765 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570773 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570783 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570791 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570799 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570809 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570818 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.570826 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572625 4827 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572676 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572687 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572696 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572706 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572716 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572724 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572733 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572741 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572750 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572760 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572770 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572779 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572788 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572797 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572807 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572823 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572833 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572844 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572853 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572862 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572871 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572883 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572892 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572900 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572910 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572921 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572930 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572938 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572948 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572957 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572973 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572983 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.572995 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573006 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573017 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573029 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573039 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573050 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573062 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573074 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" 
Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573082 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573092 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573101 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573109 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573117 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573126 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573166 4827 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573175 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573184 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573193 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573201 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573210 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573219 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573227 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573235 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573243 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573251 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573262 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573270 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573279 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573288 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573306 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573316 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573326 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573335 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573344 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573354 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573363 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573372 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573381 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573390 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" 
seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573398 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573407 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573417 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573426 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573435 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573444 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 
09:06:11.573453 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573463 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573472 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573481 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573490 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573500 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573509 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573519 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573510 4827 factory.go:55] Registering systemd factory Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573527 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573533 4827 factory.go:221] Registration of the systemd container factory successfully Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573536 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573546 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573556 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" 
seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573567 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573575 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573584 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573594 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573603 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573614 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 09:06:11 crc 
kubenswrapper[4827]: I0126 09:06:11.573623 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573633 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573656 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573665 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573675 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573683 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573693 4827 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573703 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573713 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573722 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573732 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573741 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573751 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573761 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573771 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573780 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573789 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573797 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573806 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573815 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573825 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573836 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573845 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573853 4827 reconstruct.go:97] "Volume reconstruction finished" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.573859 4827 reconciler.go:26] "Reconciler: start to sync state" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.574840 4827 factory.go:153] Registering CRI-O factory Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.574864 4827 factory.go:221] Registration of the crio container factory successfully Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.574951 4827 factory.go:219] Registration of the containerd container 
factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.574982 4827 factory.go:103] Registering Raw factory Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.575003 4827 manager.go:1196] Started watching for new ooms in manager Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.575696 4827 manager.go:319] Starting recovery of all containers Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.593231 4827 manager.go:324] Recovery completed Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.612619 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.614278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.614319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.614330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.615017 4827 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.615035 4827 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.615055 4827 state_mem.go:36] "Initialized new in-memory state store" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.662329 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.699763 4827 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.701543 4827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.701602 4827 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.701627 4827 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.701704 4827 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 09:06:11 crc kubenswrapper[4827]: W0126 09:06:11.703686 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.703752 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.762836 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.764574 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.805772 4827 kubelet.go:2359] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.863422 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 09:06:11 crc kubenswrapper[4827]: E0126 09:06:11.964348 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.967928 4827 policy_none.go:49] "None policy: Start" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.970985 4827 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 09:06:11 crc kubenswrapper[4827]: I0126 09:06:11.971032 4827 state_mem.go:35] "Initializing new in-memory state store" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.006065 4827 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.039628 4827 manager.go:334] "Starting Device Plugin manager" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.039720 4827 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.039735 4827 server.go:79] "Starting device plugin registration server" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.040195 4827 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.040219 4827 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.040703 4827 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.040782 4827 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" 
Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.040797 4827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.049054 4827 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.141298 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.142412 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.142447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.142458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.142482 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.143007 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.165817 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.343673 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.344735 4827 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.344782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.344797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.344823 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.345185 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.381898 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.381981 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.406284 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 
09:06:12.406424 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.407721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.407772 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.407786 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.407945 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408128 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408159 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408865 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408841 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408894 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408903 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.408873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409048 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409116 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409142 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409816 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409826 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.409939 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410042 4827 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410085 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410627 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410649 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410688 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410733 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410814 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.410840 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411550 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411800 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.411829 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.412660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.412694 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.412705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483390 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483438 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483465 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483488 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483511 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483534 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483659 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483726 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483798 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483882 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.483959 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.484008 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.484056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.484104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.552443 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.559582 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:33:31.966173046 +0000 UTC Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.564197 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.564261 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585832 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585882 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585921 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585936 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585977 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.585994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586040 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 
09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586028 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586076 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586054 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586109 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586040 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586130 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586154 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586175 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586170 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586224 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586230 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586188 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586202 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586214 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586249 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586261 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.586195 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.638302 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.638408 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.676513 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.676597 4827 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.741123 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.746146 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.753883 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.753931 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.753944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.753973 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.754518 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.760787 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.771404 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.772478 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6f88b6fac29c2be59ffc794d069348aebebd7ded331d80afb14fcb9568123c6a WatchSource:0}: Error finding container 6f88b6fac29c2be59ffc794d069348aebebd7ded331d80afb14fcb9568123c6a: Status 404 returned error can't find the container with id 6f88b6fac29c2be59ffc794d069348aebebd7ded331d80afb14fcb9568123c6a Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.779414 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-12b7cb13d5b7b1102ee41ceea6879b3fb218c2b34b7c4c74bd22e5f75393c9f8 WatchSource:0}: Error finding container 12b7cb13d5b7b1102ee41ceea6879b3fb218c2b34b7c4c74bd22e5f75393c9f8: Status 404 returned error can't find the container with id 12b7cb13d5b7b1102ee41ceea6879b3fb218c2b34b7c4c74bd22e5f75393c9f8 Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.788408 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.791006 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-60818f0786d54e649dec69084c58536b293d0fad8f88fa039cad7515e98a80a8 WatchSource:0}: Error finding container 60818f0786d54e649dec69084c58536b293d0fad8f88fa039cad7515e98a80a8: Status 404 returned error can't find the container with id 60818f0786d54e649dec69084c58536b293d0fad8f88fa039cad7515e98a80a8 Jan 26 09:06:12 crc kubenswrapper[4827]: I0126 09:06:12.791922 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.813987 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0fd878cb2f64af00aa2778fbdf477dd7333e9ae731eeeb55b349fa8d594da43b WatchSource:0}: Error finding container 0fd878cb2f64af00aa2778fbdf477dd7333e9ae731eeeb55b349fa8d594da43b: Status 404 returned error can't find the container with id 0fd878cb2f64af00aa2778fbdf477dd7333e9ae731eeeb55b349fa8d594da43b Jan 26 09:06:12 crc kubenswrapper[4827]: W0126 09:06:12.816843 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1f90e5216055a24666dff33956e08b8929e2de0616d99d0d15e7db232feeb030 WatchSource:0}: Error finding container 1f90e5216055a24666dff33956e08b8929e2de0616d99d0d15e7db232feeb030: Status 404 returned error can't find the container with id 1f90e5216055a24666dff33956e08b8929e2de0616d99d0d15e7db232feeb030 Jan 26 09:06:12 crc kubenswrapper[4827]: E0126 09:06:12.966841 4827 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="1.6s" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.497117 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 09:06:13 crc kubenswrapper[4827]: E0126 09:06:13.498787 4827 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.553313 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.555321 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.557206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.557251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.557263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.557288 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:13 crc 
kubenswrapper[4827]: E0126 09:06:13.557691 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.560203 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:16:21.594305582 +0000 UTC Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.709884 4827 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8" exitCode=0 Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.709972 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.710108 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0fd878cb2f64af00aa2778fbdf477dd7333e9ae731eeeb55b349fa8d594da43b"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.710218 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.711274 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.711306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.711317 4827 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.712781 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.712817 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.712856 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.712870 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f90e5216055a24666dff33956e08b8929e2de0616d99d0d15e7db232feeb030"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.720040 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474" exitCode=0 Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.720103 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474"} Jan 26 09:06:13 crc 
kubenswrapper[4827]: I0126 09:06:13.720129 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60818f0786d54e649dec69084c58536b293d0fad8f88fa039cad7515e98a80a8"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.720218 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.721360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.721387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.721395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.722614 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8cb17cbe24b71ff6f5b853ab63c110a99b27ea190cee4ca58dfe9b5845328d19" exitCode=0 Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.722661 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8cb17cbe24b71ff6f5b853ab63c110a99b27ea190cee4ca58dfe9b5845328d19"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.722860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12b7cb13d5b7b1102ee41ceea6879b3fb218c2b34b7c4c74bd22e5f75393c9f8"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.722987 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 
crc kubenswrapper[4827]: I0126 09:06:13.723950 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.723979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.723988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.725577 4827 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706" exitCode=0 Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.725620 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.725682 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f88b6fac29c2be59ffc794d069348aebebd7ded331d80afb14fcb9568123c6a"} Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.725764 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.726505 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.726536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.726546 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.728189 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.729087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.729128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:13 crc kubenswrapper[4827]: I0126 09:06:13.729138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:14 crc kubenswrapper[4827]: W0126 09:06:14.355126 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 26 09:06:14 crc kubenswrapper[4827]: E0126 09:06:14.355208 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.561241 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:29:32.41915212 +0000 UTC Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.731543 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.731596 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.731614 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.731720 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.732628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.732688 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.732700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.734456 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.734476 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:14 crc 
kubenswrapper[4827]: I0126 09:06:14.735258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.735289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.735300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.737131 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.737166 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.737183 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.737194 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.738797 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="dcca98729d8b56435c3b1faaea918597871ca069a8645f8882e26be4c4190502" exitCode=0 Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.738850 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dcca98729d8b56435c3b1faaea918597871ca069a8645f8882e26be4c4190502"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.738947 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.739956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.739983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.740001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.742430 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97"} Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.742512 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.743873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.743911 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:14 crc kubenswrapper[4827]: I0126 09:06:14.743936 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.157840 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.160904 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.160974 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.160987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.161024 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.561910 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:06:50.509040677 +0000 UTC Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.746003 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="64cf5b9b3f5ca1307a89e19761997d0b02b0961d5137465bb8176eab22123fb2" exitCode=0 Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.746071 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"64cf5b9b3f5ca1307a89e19761997d0b02b0961d5137465bb8176eab22123fb2"} Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.746215 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747142 4827 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d"} Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747894 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747913 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747945 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.747920 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.748822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.748840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.748849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.749285 4827 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.749314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.749322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.752285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.752326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:15 crc kubenswrapper[4827]: I0126 09:06:15.752335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.562821 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:50:32.627260379 +0000 UTC Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753140 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"010900adbc7073bab0a28559f244af3a09cc1996e2c67f5e60d9368b01f33205"} Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753182 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753188 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b2e41145c21f287ae1a6756fde18f8ad8314ef91a4b843c4c79143db0795e110"} Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753203 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1941d9774292127e340e6616fd6539a0ea0d25a51c0dfe807f4e040017f0392"} Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753215 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753290 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753214 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b0c55a54cc818755e8acbc8be6171a592ff8433f835060e0c2ac46e5aeb94fab"} Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"95d6e3def1bb7db4633b9c429e2349f1666da27b1dcd8e8376cff675883f4cd3"} Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.753997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.754008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.754244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:16 crc kubenswrapper[4827]: I0126 09:06:16.754292 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:16 crc 
kubenswrapper[4827]: I0126 09:06:16.754303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.298589 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.298768 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.299811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.299861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.299875 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.405971 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.524311 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.563190 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:42:02.631971681 +0000 UTC Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.755543 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.755597 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 
09:06:17.756976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.757011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.757025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.757120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.757192 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.757217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:17 crc kubenswrapper[4827]: I0126 09:06:17.894816 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.563297 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:56:45.327151133 +0000 UTC Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.757898 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.758865 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.758910 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.758919 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.801141 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.801431 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.802976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.803468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:18 crc kubenswrapper[4827]: I0126 09:06:18.803489 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.371581 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.536849 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.537036 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.538280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.538320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.538332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.563920 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:38:03.054466352 +0000 UTC
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.740988 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.760891 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.760936 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762689 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:19 crc kubenswrapper[4827]: I0126 09:06:19.762713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:20 crc kubenswrapper[4827]: I0126 09:06:20.564729 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:03:09.720426636 +0000 UTC
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.045716 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.045998 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.047542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.047585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.047602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:21 crc kubenswrapper[4827]: I0126 09:06:21.565283 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:32:42.937950035 +0000 UTC
Jan 26 09:06:22 crc kubenswrapper[4827]: E0126 09:06:22.049177 4827 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.565428 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:21:22.404329596 +0000 UTC
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.594398 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.595059 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.596699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.596919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.597033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.599678 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.769204 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.770482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.770518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:22 crc kubenswrapper[4827]: I0126 09:06:22.770526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.566131 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:41:43.911210373 +0000 UTC
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.690043 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.771313 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.773126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.773167 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.773180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:23 crc kubenswrapper[4827]: I0126 09:06:23.778053 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.553493 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.566763 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:33:27.645506158 +0000 UTC
Jan 26 09:06:24 crc kubenswrapper[4827]: E0126 09:06:24.567994 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 26 09:06:24 crc kubenswrapper[4827]: W0126 09:06:24.579388 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.579468 4827 trace.go:236] Trace[1344891810]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 09:06:14.578) (total time: 10001ms):
Jan 26 09:06:24 crc kubenswrapper[4827]: Trace[1344891810]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:06:24.579)
Jan 26 09:06:24 crc kubenswrapper[4827]: Trace[1344891810]: [10.00124069s] [10.00124069s] END
Jan 26 09:06:24 crc kubenswrapper[4827]: E0126 09:06:24.579487 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.773174 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.773997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.774036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.774046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:24 crc kubenswrapper[4827]: W0126 09:06:24.960142 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 09:06:24 crc kubenswrapper[4827]: I0126 09:06:24.960228 4827 trace.go:236] Trace[2055190109]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 09:06:14.958) (total time: 10001ms):
Jan 26 09:06:24 crc kubenswrapper[4827]: Trace[2055190109]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:06:24.960)
Jan 26 09:06:24 crc kubenswrapper[4827]: Trace[2055190109]: [10.001695345s] [10.001695345s] END
Jan 26 09:06:24 crc kubenswrapper[4827]: E0126 09:06:24.960249 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 09:06:25 crc kubenswrapper[4827]: E0126 09:06:25.162466 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 26 09:06:25 crc kubenswrapper[4827]: W0126 09:06:25.194915 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.195230 4827 trace.go:236] Trace[545542367]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 09:06:15.193) (total time: 10001ms):
Jan 26 09:06:25 crc kubenswrapper[4827]: Trace[545542367]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:06:25.194)
Jan 26 09:06:25 crc kubenswrapper[4827]: Trace[545542367]: [10.001451871s] [10.001451871s] END
Jan 26 09:06:25 crc kubenswrapper[4827]: E0126 09:06:25.195405 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.278877 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.278955 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.284245 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.284541 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 26 09:06:25 crc kubenswrapper[4827]: I0126 09:06:25.567044 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:56:21.875122097 +0000 UTC
Jan 26 09:06:26 crc kubenswrapper[4827]: I0126 09:06:26.568074 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:08:22.824267131 +0000 UTC
Jan 26 09:06:26 crc kubenswrapper[4827]: I0126 09:06:26.690846 4827 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 09:06:26 crc kubenswrapper[4827]: I0126 09:06:26.690931 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.406776 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.406894 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.568540 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:11:34.719830736 +0000 UTC
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.924935 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.925112 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.926423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.926457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.926468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:27 crc kubenswrapper[4827]: I0126 09:06:27.940433 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.225716 4827 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.363406 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.364689 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.364760 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.364772 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.364797 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 09:06:28 crc kubenswrapper[4827]: E0126 09:06:28.368365 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.568930 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:58:32.748342575 +0000 UTC
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.783288 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.784460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.784523 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.784537 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.811773 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.812010 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.813182 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.813839 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.814092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.814269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.813300 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 09:06:28 crc kubenswrapper[4827]: I0126 09:06:28.819554 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.570492 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:57:08.154571045 +0000 UTC
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.785688 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.786122 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.786205 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.786695 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.786727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:29 crc kubenswrapper[4827]: I0126 09:06:29.786739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.022866 4827 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.282238 4827 trace.go:236] Trace[2141409821]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 09:06:18.523) (total time: 11758ms):
Jan 26 09:06:30 crc kubenswrapper[4827]: Trace[2141409821]: ---"Objects listed" error: 11758ms (09:06:30.282)
Jan 26 09:06:30 crc kubenswrapper[4827]: Trace[2141409821]: [11.758421447s] [11.758421447s] END
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.282263 4827 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.283069 4827 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.289705 4827 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.364312 4827 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.555253 4827 apiserver.go:52] "Watching apiserver"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.559725 4827 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.560239 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.560741 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.560801 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.560965 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.561164 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.561326 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.561349 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.561454 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.561479 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.561687 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564510 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564690 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564762 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564766 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564696 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.564873 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.565000 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.565009 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.565402 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.570610 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:49:35.928862803 +0000 UTC
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.597753 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.619181 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.628702 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.639771 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.650685 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.660822 4827 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.661552 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.673414 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685181 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685226 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685248 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685268 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685291 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685314 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685332 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685350 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685367 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685388 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685405 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685425 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685487 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685508 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685546 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685573 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685613 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685657 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685680 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685701 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685725 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685772 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685796 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685820 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685843 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685870 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685863 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685894 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685923 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685940 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.685947 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686027 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686049 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686070 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686090 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686111 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686131 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686150 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686172 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686194 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686216 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686239 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686249 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686262 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686336 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686355 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686373 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686388 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686403 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686418 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686435 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686451 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686466 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686485 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 
09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686500 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686516 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686532 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686549 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686557 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686607 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686653 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686683 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686718 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686730 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686735 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686788 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686805 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686825 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686841 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686855 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686876 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686894 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686914 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 09:06:30 crc 
kubenswrapper[4827]: I0126 09:06:30.686932 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686967 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686984 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687000 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687024 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687044 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687102 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687126 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687149 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687172 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 
09:06:30.687193 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687213 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687237 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687257 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687308 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687329 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687350 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687372 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687394 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687415 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687433 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 
09:06:30.687451 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687481 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687499 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687573 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687599 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687622 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687663 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687687 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687711 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687731 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687753 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687777 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687801 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687821 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687840 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687857 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687875 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687892 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687908 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687924 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687944 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 
09:06:30.687963 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687980 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688004 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688024 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688043 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688061 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688079 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688098 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688114 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688130 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688147 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688163 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688180 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690277 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690323 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690342 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690366 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690436 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690455 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690477 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690552 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690713 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690738 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690760 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690782 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690810 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691114 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691140 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691163 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691273 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691356 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691383 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691459 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691538 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691627 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691699 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691777 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691860 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691888 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691968 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691992 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692041 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692109 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692131 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692186 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692236 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692259 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692296 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692355 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692376 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692408 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692440 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692506 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692526 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692559 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692661 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692682 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692705 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692767 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692789 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692811 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692916 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692970 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693040 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693128 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693149 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693169 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693188 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693208 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693229 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693304 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693330 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693351 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693385 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693447 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 09:06:30 crc 
kubenswrapper[4827]: I0126 09:06:30.693493 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693511 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693591 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693691 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693716 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693755 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693780 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693821 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 
09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693868 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693904 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693923 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693983 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694080 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694349 4827 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694369 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694380 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694391 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694402 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 
09:06:30.694432 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686856 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686888 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686929 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.686994 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687124 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687244 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687355 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687389 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687491 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687755 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687807 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.687838 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690206 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690229 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690497 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697544 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.690891 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691149 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691196 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691481 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691522 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691747 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691776 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691790 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.691938 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692149 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692299 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692467 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692562 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.692694 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693061 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693338 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693349 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693583 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693601 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.693611 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694395 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694743 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.694971 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.695072 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.695166 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.696114 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.696458 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.696620 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.696691 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697378 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697978 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697436 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697453 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697450 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.688198 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.697918 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.696891 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.698460 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.698549 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.698993 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.699019 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.699023 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.699092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.699909 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.700130 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.713182 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:31.213165321 +0000 UTC m=+19.861837140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703976 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.713395 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.713513 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702391 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702519 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702784 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702797 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703078 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703167 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703533 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703585 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703832 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.704218 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.704281 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718443 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.704808 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702733 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.703599 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.706331 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.706451 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.707016 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.707553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.708021 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.708417 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.708488 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.708988 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.709033 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.709232 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.709463 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.709802 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.712360 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.712892 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.713582 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.713805 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.713961 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.698995 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.714024 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:06:31.214007943 +0000 UTC m=+19.862679762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714069 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714072 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714078 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714308 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714179 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714624 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.714785 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715026 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715299 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715348 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715677 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.715956 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716062 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716292 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716456 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716610 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716671 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716833 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.716930 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717033 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717012 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717346 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717724 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717733 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717893 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.717915 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718261 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718285 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718463 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718611 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718103 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718824 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.718148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.719159 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.719177 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.719427 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.719706 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720098 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720182 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720239 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.706790 4827 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720607 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720752 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.721105 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.721212 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.721414 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720388 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720400 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720414 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720511 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.720529 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:31.220455439 +0000 UTC m=+19.869127248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.720574 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.721815 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.721817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.722237 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.702478 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.722008 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.722163 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.701432 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.722523 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.722538 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.722549 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.722817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.722972 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.723471 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.723500 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.723708 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.723744 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.723762 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.723918 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.723939 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.723952 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.724031 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:31.224012297 +0000 UTC m=+19.872684116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.724100 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.724461 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.724769 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725257 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725447 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725654 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725684 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.725823 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.701208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.726064 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: E0126 09:06:30.726122 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:31.225978359 +0000 UTC m=+19.874650178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.726421 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.726588 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.727201 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.728702 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.729051 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.729174 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.729233 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.729762 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.729809 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.733333 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.733926 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.737080 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.737473 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.738262 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.742409 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: 
"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.746732 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.747046 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.754656 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.791267 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.793000 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d" exitCode=255 Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.793040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d"} Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795266 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795305 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795323 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795368 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795502 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795521 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795584 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795621 4827 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795631 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795657 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795668 4827 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795708 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795760 4827 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795788 4827 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795797 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795806 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795818 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 
crc kubenswrapper[4827]: I0126 09:06:30.795851 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795931 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795947 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795960 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795974 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795986 4827 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.795999 4827 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796010 4827 reconciler_common.go:293] "Volume detached 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796022 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796034 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796045 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796060 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796071 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796083 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796670 4827 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796691 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796729 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796738 4827 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796746 4827 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796755 4827 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796764 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796773 4827 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: 
I0126 09:06:30.796782 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796791 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796801 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796811 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796820 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796828 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796836 4827 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796845 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796853 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.796863 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.797521 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.797668 4827 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.797767 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.797880 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.797991 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" 
DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798097 4827 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798220 4827 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798335 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798452 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798568 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798724 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798762 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc 
kubenswrapper[4827]: I0126 09:06:30.798775 4827 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798789 4827 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798802 4827 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798813 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798824 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798835 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798845 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798856 4827 reconciler_common.go:293] "Volume 
detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798949 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798966 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798983 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.798994 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.799009 4827 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.799021 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800727 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800741 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800755 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800769 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800779 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800790 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800801 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800812 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800825 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800836 4827 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800847 4827 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800860 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800872 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800885 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800897 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800909 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800921 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800933 4827 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800945 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800958 4827 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800970 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800984 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.800995 4827 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801006 4827 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801019 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801031 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801043 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801055 4827 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801067 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801079 4827 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801091 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801105 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801118 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801130 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801143 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801155 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801168 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801179 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801192 4827 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801205 4827 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801218 4827 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801230 4827 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801241 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801253 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801264 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801274 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801284 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801300 4827 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801310 4827 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801321 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801331 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 
09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801342 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801352 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801364 4827 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801373 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801384 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801396 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801407 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801419 4827 reconciler_common.go:293] "Volume detached for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801430 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801443 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801454 4827 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801464 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801474 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801484 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801495 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801505 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801515 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801525 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801536 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801547 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801558 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801581 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801596 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801605 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801617 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801627 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801654 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801665 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801678 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 
09:06:30.801688 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801699 4827 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801710 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801720 4827 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801730 4827 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801740 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801749 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801761 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801775 4827 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801786 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801796 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801808 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801817 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801829 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801840 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801850 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801860 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801870 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801882 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801900 4827 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801912 4827 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801923 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801934 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801945 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801955 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801967 4827 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801978 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801988 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.801998 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802009 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802019 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802033 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802045 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802056 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802069 4827 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802079 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.802093 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.811356 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.813922 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.814278 4827 scope.go:117] "RemoveContainer" containerID="eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.831087 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.877046 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.877190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.892057 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.898081 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 09:06:30 crc kubenswrapper[4827]: W0126 09:06:30.915839 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-0b2ae5d2a376a0b81faf1e0194a941ca00b7985263c5ec5b655e795186e1d228 WatchSource:0}: Error finding container 0b2ae5d2a376a0b81faf1e0194a941ca00b7985263c5ec5b655e795186e1d228: Status 404 returned error can't find the container with id 0b2ae5d2a376a0b81faf1e0194a941ca00b7985263c5ec5b655e795186e1d228
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.921248 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.932898 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:30 crc kubenswrapper[4827]: W0126 09:06:30.934766 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b2f6eddc8f7c01389f11b3699addf31fd57a23d09f0169fc7ec95f25c57169b1 WatchSource:0}: Error finding container b2f6eddc8f7c01389f11b3699addf31fd57a23d09f0169fc7ec95f25c57169b1: Status 404 returned error can't find the container with id b2f6eddc8f7c01389f11b3699addf31fd57a23d09f0169fc7ec95f25c57169b1
Jan 26 09:06:30 crc kubenswrapper[4827]: I0126 09:06:30.946994 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.304882 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.304941 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.304966 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.304987 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.305005 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305098 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305145 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:32.305131428 +0000 UTC m=+20.953803247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305466 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305481 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305494 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305522 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:32.305512998 +0000 UTC m=+20.954184817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305570 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305583 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305591 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305634 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:32.30560775 +0000 UTC m=+20.954279579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305702 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.305728 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:32.305719923 +0000 UTC m=+20.954391732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.306531 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:06:32.306507546 +0000 UTC m=+20.955179395 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.570779 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:50:24.308281542 +0000 UTC
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.702058 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:06:31 crc kubenswrapper[4827]: E0126 09:06:31.702193 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.705695 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.706408 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.707440 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.708265 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.709836 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.710399 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.711476 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.712153 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.713281 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.713903 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.714954 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.715753 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.716483 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.716837 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.717387 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.718049 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.719036 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.719754 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.720686 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.721482 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.722188 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.723402 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.724046 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.724576 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.725682 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.726158 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.727213 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.727989 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.728892 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.729123 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.729629 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.733669 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.734169 4827 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.734267 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.735559 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.736057 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.736442 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.737772 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.738527 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.739061 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.741557 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.743069 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.744106 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.745843 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.747348 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.748229 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.749348 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.750079 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.751410 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.752528 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.753719 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.754306 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.755074 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.756310 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.757047 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.758249 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.793290 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.795378 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.795443 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"995c45cfd9d1b5c0ca38a074248776183b42539273f0c3d2fa6483ed76e20027"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.797204 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.798688 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.798951 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.800159 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.800219 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.800233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2f6eddc8f7c01389f11b3699addf31fd57a23d09f0169fc7ec95f25c57169b1"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.800859 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0b2ae5d2a376a0b81faf1e0194a941ca00b7985263c5ec5b655e795186e1d228"} Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.822340 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.852818 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.864973 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.876999 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.888216 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.900377 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.917145 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.936291 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.953268 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.970004 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:31 crc kubenswrapper[4827]: I0126 09:06:31.984111 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.312127 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.312199 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:32 crc 
kubenswrapper[4827]: E0126 09:06:32.312224 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:06:34.312205929 +0000 UTC m=+22.960877748 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.312248 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.312299 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.312330 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312290 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312414 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312431 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312444 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312417 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:34.312409644 +0000 UTC m=+22.961081463 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312483 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:34.312473606 +0000 UTC m=+22.961145425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312371 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312507 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:34.312500247 +0000 UTC m=+22.961172066 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312366 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312520 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312527 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.312545 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:34.312539798 +0000 UTC m=+22.961211617 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.571560 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:27:57.675800924 +0000 UTC Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.701898 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:32 crc kubenswrapper[4827]: I0126 09:06:32.701982 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.702026 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:32 crc kubenswrapper[4827]: E0126 09:06:32.702139 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.572303 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:40:14.776292495 +0000 UTC Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.697563 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.702044 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:33 crc kubenswrapper[4827]: E0126 09:06:33.702193 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.708416 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.712341 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.722456 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.740774 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.761771 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.784405 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.807143 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.810098 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718"} Jan 26 09:06:33 crc kubenswrapper[4827]: 
I0126 09:06:33.826966 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.849263 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.865436 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.891042 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.912976 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.930027 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.944184 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.962772 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.978123 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:33 crc kubenswrapper[4827]: I0126 09:06:33.992011 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.330789 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.330924 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.330978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.331029 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.331083 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331154 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331198 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:06:38.33111928 +0000 UTC m=+26.979791149 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331233 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331297 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:38.331276314 +0000 UTC m=+26.979948173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331318 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331330 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331373 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331433 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:38.331400938 +0000 UTC m=+26.980072947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331471 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:38.331453239 +0000 UTC m=+26.980125308 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331593 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331630 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331686 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.331756 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:38.331733818 +0000 UTC m=+26.980405817 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.572717 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:45:56.161826496 +0000 UTC Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.702027 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.702117 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.702183 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.702269 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.768974 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.771139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.771168 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.771176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.771228 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.778730 4827 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.779095 4827 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.780414 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.780448 4827 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.780457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.780472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.780481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.806315 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.817492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.817549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.817562 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.817580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.817655 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.851660 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.861098 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.861156 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.861169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.861183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.861193 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.879050 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.881938 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.881991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.882003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.882016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.882027 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.892595 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.895478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.895501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.895509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.895521 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.895528 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.905113 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:34 crc kubenswrapper[4827]: E0126 09:06:34.905216 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.906330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.906394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.906404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.906418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:34 crc kubenswrapper[4827]: I0126 09:06:34.906430 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:34Z","lastTransitionTime":"2026-01-26T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.010257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.010297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.010308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.010325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.010334 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.113110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.113164 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.113176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.113194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.113206 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.217452 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.217501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.217508 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.217524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.217534 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.321203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.321284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.321308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.321330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.321344 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.424185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.424235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.424252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.424278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.424297 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.527930 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.528293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.528531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.528791 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.528994 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.574128 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:34:25.586967594 +0000 UTC Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.632010 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.632376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.632618 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.632906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.633108 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.702746 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:35 crc kubenswrapper[4827]: E0126 09:06:35.703427 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.735172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.735229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.735239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.735264 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.735276 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.838319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.838365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.838376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.838398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.838409 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.940910 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.940944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.940954 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.940967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:35 crc kubenswrapper[4827]: I0126 09:06:35.940977 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:35Z","lastTransitionTime":"2026-01-26T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.043500 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.043553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.043564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.043584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.043597 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.145775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.145811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.145820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.145834 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.145844 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.247692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.247723 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.247733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.247746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.247755 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.269426 4827 csr.go:261] certificate signing request csr-t88rz is approved, waiting to be issued Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.281929 4827 csr.go:257] certificate signing request csr-t88rz is issued Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.349504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.349535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.349543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.349560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.349570 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.452749 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.452790 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.452800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.452814 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.452826 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.554902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.554948 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.554958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.554973 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.554983 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.575308 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:53:36.216345705 +0000 UTC Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.657037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.657078 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.657090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.657107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.657119 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.702160 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:36 crc kubenswrapper[4827]: E0126 09:06:36.702310 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.702588 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:36 crc kubenswrapper[4827]: E0126 09:06:36.702815 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.759629 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.759973 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.760091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.760264 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.760378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.766753 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k9x8x"] Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.767188 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-v7qpk"] Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.767362 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qmzjr"] Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.767528 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.767679 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.767866 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qmzjr" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.768202 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cbqrj"] Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.768747 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.771604 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.771691 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.771708 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.772720 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.772740 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.772771 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.772788 4827 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773024 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773066 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773073 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773095 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773230 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773297 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.773370 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.778926 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.795618 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.819391 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.836508 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.852278 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-cnibin\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.852511 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-multus\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.852596 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzv4\" (UniqueName: \"kubernetes.io/projected/ef39dc20-499c-4665-9555-481361ffe06d-kube-api-access-7rzv4\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.852705 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.852777 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-multus-certs\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853153 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-conf-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853222 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-daemon-config\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krbhj\" (UniqueName: \"kubernetes.io/projected/d7e37ec5-8c72-432d-9809-ac670c707671-kube-api-access-krbhj\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853435 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-bin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853519 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef39dc20-499c-4665-9555-481361ffe06d-proxy-tls\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853625 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-system-cni-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853720 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853797 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-etc-kubernetes\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853934 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cni-binary-copy\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.853974 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6n4z\" (UniqueName: \"kubernetes.io/projected/b871a59f-4896-4609-806e-7255dd7708b8-kube-api-access-x6n4z\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr" Jan 26 09:06:36 crc 
kubenswrapper[4827]: I0126 09:06:36.854019 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-os-release\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854037 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-system-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854073 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-socket-dir-parent\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854061 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854088 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ef39dc20-499c-4665-9555-481361ffe06d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854247 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5s4\" (UniqueName: \"kubernetes.io/projected/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-kube-api-access-wn5s4\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854283 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b871a59f-4896-4609-806e-7255dd7708b8-hosts-file\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854323 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-kubelet\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854347 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-netns\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854370 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cnibin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854392 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-k8s-cni-cncf-io\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854415 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-os-release\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854434 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-hostroot\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854457 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef39dc20-499c-4665-9555-481361ffe06d-rootfs\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.854477 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-binary-copy\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.861835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.862063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.862128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.862203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.862260 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.869202 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.886950 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.900201 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.912421 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.923140 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.936303 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.955582 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cni-binary-copy\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.955929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6n4z\" (UniqueName: \"kubernetes.io/projected/b871a59f-4896-4609-806e-7255dd7708b8-kube-api-access-x6n4z\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956036 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-os-release\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj" Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956147 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-system-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956252 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-socket-dir-parent\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956349 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef39dc20-499c-4665-9555-481361ffe06d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956464 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5s4\" (UniqueName: \"kubernetes.io/projected/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-kube-api-access-wn5s4\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b871a59f-4896-4609-806e-7255dd7708b8-hosts-file\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956655 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b871a59f-4896-4609-806e-7255dd7708b8-hosts-file\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956172 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-os-release\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956363 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-socket-dir-parent\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-system-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956874 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-kubelet\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.956964 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-kubelet\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cni-binary-copy\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-netns\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957192 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef39dc20-499c-4665-9555-481361ffe06d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957155 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-netns\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957381 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-k8s-cni-cncf-io\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]:
I0126 09:06:36.957475 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cnibin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957587 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-os-release\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957717 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-hostroot\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957820 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef39dc20-499c-4665-9555-481361ffe06d-rootfs\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957911 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef39dc20-499c-4665-9555-481361ffe06d-rootfs\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957529 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-cnibin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957753 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-hostroot\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957488 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-k8s-cni-cncf-io\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-binary-copy\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957722 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-os-release\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.957998 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-cnibin\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-multus\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958063 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzv4\" (UniqueName: \"kubernetes.io/projected/ef39dc20-499c-4665-9555-481361ffe06d-kube-api-access-7rzv4\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958082 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-cnibin\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958090 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958098 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-multus\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-multus-certs\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958148 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-conf-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-daemon-config\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958200 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-conf-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958203 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958227 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krbhj\" (UniqueName: \"kubernetes.io/projected/d7e37ec5-8c72-432d-9809-ac670c707671-kube-api-access-krbhj\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958251 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-bin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef39dc20-499c-4665-9555-481361ffe06d-proxy-tls\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958279 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-cni-dir\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958299 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-system-cni-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958320 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-var-lib-cni-bin\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958327 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958353 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-etc-kubernetes\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-etc-kubernetes\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958438 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-system-cni-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958182 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-host-run-multus-certs\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958679 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-multus-daemon-config\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.958862 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d7e37ec5-8c72-432d-9809-ac670c707671-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.959507 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.960123 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d7e37ec5-8c72-432d-9809-ac670c707671-cni-binary-copy\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.964801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:36 crc kubenswrapper[4827]:
I0126 09:06:36.964836 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.964851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.964865 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.964876 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:36Z","lastTransitionTime":"2026-01-26T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.969618 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef39dc20-499c-4665-9555-481361ffe06d-proxy-tls\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:36 crc kubenswrapper[4827]: I0126 09:06:36.989134 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:36Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:36.992181 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6n4z\" (UniqueName: \"kubernetes.io/projected/b871a59f-4896-4609-806e-7255dd7708b8-kube-api-access-x6n4z\") pod \"node-resolver-qmzjr\" (UID: \"b871a59f-4896-4609-806e-7255dd7708b8\") " pod="openshift-dns/node-resolver-qmzjr"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:36.996049 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzv4\" (UniqueName: \"kubernetes.io/projected/ef39dc20-499c-4665-9555-481361ffe06d-kube-api-access-7rzv4\") pod \"machine-config-daemon-k9x8x\" (UID: \"ef39dc20-499c-4665-9555-481361ffe06d\") " pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.000262 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krbhj\" (UniqueName: \"kubernetes.io/projected/d7e37ec5-8c72-432d-9809-ac670c707671-kube-api-access-krbhj\") pod \"multus-additional-cni-plugins-cbqrj\" (UID: \"d7e37ec5-8c72-432d-9809-ac670c707671\") " pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.009297 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5s4\" (UniqueName: \"kubernetes.io/projected/e83a7bed-4909-4830-89e5-13c9a0bfcaf6-kube-api-access-wn5s4\") pod \"multus-v7qpk\" (UID: \"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\") " pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.021833 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.042329 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.068038 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.068074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.068082 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.068095 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.068104 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.072407 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.083001 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v7qpk"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.091214 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qmzjr"
Jan 26 09:06:37 crc kubenswrapper[4827]: W0126 09:06:37.095295 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode83a7bed_4909_4830_89e5_13c9a0bfcaf6.slice/crio-097b9f4e31ab5cb3cf20d90b704143145491e085210f0cfd1151b85415524fe7 WatchSource:0}: Error finding container 097b9f4e31ab5cb3cf20d90b704143145491e085210f0cfd1151b85415524fe7: Status 404 returned error can't find the container with id 097b9f4e31ab5cb3cf20d90b704143145491e085210f0cfd1151b85415524fe7
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.103054 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.109974 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cbqrj"
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.132040 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.176868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.176915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.176925 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.176942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.176952 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.188492 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.198326 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9xkm"] Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.199074 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.201556 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.201824 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.205291 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.205307 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.205581 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.205749 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.205888 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.212754 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.238137 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.253966 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262260 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262423 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262447 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gss4q\" (UniqueName: \"kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262502 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262527 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262590 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262611 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262666 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262693 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 
09:06:37.262712 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262869 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262930 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.262956 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc 
kubenswrapper[4827]: I0126 09:06:37.263021 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.263079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.263110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.263133 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.278888 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287605 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 09:01:36 +0000 UTC, rotation deadline is 2026-11-29 11:02:25.561656576 +0000 UTC Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287669 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7369h55m48.273989547s for next certificate rotation Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287796 4827 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.287821 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.317216 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.339656 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.361766 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364750 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364775 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gss4q\" (UniqueName: 
\"kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364793 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364813 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364847 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364870 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364889 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364908 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364924 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364976 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.364997 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365017 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365048 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365067 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365104 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365123 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365142 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365210 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365257 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365286 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365569 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365604 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365652 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365689 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365754 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365821 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365854 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.365887 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" 
Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.367045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.367124 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.367679 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.367732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.368163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.374973 4827 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.396437 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.400834 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.400867 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.400876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.400892 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.400901 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.411291 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gss4q\" (UniqueName: \"kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q\") pod \"ovnkube-node-q9xkm\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.419057 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-
alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.432538 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.470143 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.492928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.502435 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.502472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.502482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.502499 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.502510 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.507442 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.521549 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.539395 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.541853 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:37 crc kubenswrapper[4827]: W0126 09:06:37.551901 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ba16376_c20a_411b_b45a_d7e718fbbac0.slice/crio-732af990ba015c7db066924bb7b311eddcd6fea77089bb0880ce342ded80684e WatchSource:0}: Error finding container 732af990ba015c7db066924bb7b311eddcd6fea77089bb0880ce342ded80684e: Status 404 returned error can't find the container with id 732af990ba015c7db066924bb7b311eddcd6fea77089bb0880ce342ded80684e Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.554275 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.573119 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.575599 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:49:45.331554709 +0000 UTC Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.588178 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.604289 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.605210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.605240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.605251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.605268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.605279 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.702548 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:37 crc kubenswrapper[4827]: E0126 09:06:37.702688 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.707286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.707333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.707344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.707361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.707373 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.810132 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.810165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.810176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.810192 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.810204 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.824177 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.824220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.824230 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"4280396d92e8fa07f94f8793d2bed40af4849e43e0f0b732d3cf545f41efb8a3"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.826134 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qmzjr" event={"ID":"b871a59f-4896-4609-806e-7255dd7708b8","Type":"ContainerStarted","Data":"d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.826195 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qmzjr" event={"ID":"b871a59f-4896-4609-806e-7255dd7708b8","Type":"ContainerStarted","Data":"08935b24a0f2533339cbdb947b543e306ab01d839e5d1c31bf14849f469f1976"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.827571 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerStarted","Data":"87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f"} Jan 26 09:06:37 crc 
kubenswrapper[4827]: I0126 09:06:37.827607 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerStarted","Data":"097b9f4e31ab5cb3cf20d90b704143145491e085210f0cfd1151b85415524fe7"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.829534 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" exitCode=0 Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.829604 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.829631 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"732af990ba015c7db066924bb7b311eddcd6fea77089bb0880ce342ded80684e"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.831918 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f" exitCode=0 Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.831954 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.831979 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" 
event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerStarted","Data":"84bdabac232355d3d328ec05b2c727864869250e4ac1e19353d857713d14c8ab"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.858550 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.873322 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.886993 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.907801 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.912527 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.912559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.912567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 
09:06:37.912580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.912591 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:37Z","lastTransitionTime":"2026-01-26T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.921491 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.936564 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.946443 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.962915 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:37 crc kubenswrapper[4827]: I0126 09:06:37.998125 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:37Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.017620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.017696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.017708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.017723 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.017733 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.020587 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.034352 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.045761 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.057407 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.068812 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.080750 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.093687 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.109120 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.119943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.119963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.119972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.119983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.119992 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.125388 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.144202 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.157829 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.169389 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.182448 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.196878 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.210371 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.221902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.221940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.221953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.221970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.221992 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.223013 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.237120 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.328900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.328952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.328962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.328980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.328990 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.375816 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376018 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:06:46.375992294 +0000 UTC m=+35.024664113 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.376245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.376275 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.376302 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.376340 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376331 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376474 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376498 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376514 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376433 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376536 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376548 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376516 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:46.376491548 +0000 UTC m=+35.025163367 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376596 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:46.37658386 +0000 UTC m=+35.025255669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376616 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-26 09:06:46.376606421 +0000 UTC m=+35.025278460 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376892 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.376950 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:06:46.376938331 +0000 UTC m=+35.025610340 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.432900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.432936 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.432947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.432962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.432972 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.535442 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.535481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.535492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.535509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.535520 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.576093 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:41:45.969857998 +0000 UTC Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.640899 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.640939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.640952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.640969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.640981 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.702515 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.702903 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.703190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:38 crc kubenswrapper[4827]: E0126 09:06:38.703271 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.742870 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.742905 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.742925 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.742942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.742953 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.795692 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qn5kf"] Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.797079 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.805119 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.805718 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.805897 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.806054 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.818742 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.830260 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.841938 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844414 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" 
event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844455 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844463 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844471 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844545 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844558 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 
09:06:38.844576 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.844584 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.846680 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825" exitCode=0 Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.846702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.853698 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.868051 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.882322 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4d1d479-6214-447e-95c4-b563700234d0-serviceca\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.882392 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg59w\" (UniqueName: \"kubernetes.io/projected/a4d1d479-6214-447e-95c4-b563700234d0-kube-api-access-fg59w\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.882442 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4d1d479-6214-447e-95c4-b563700234d0-host\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.882736 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.895433 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.909191 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.921403 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.940819 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.946305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.946337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.946346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.946362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.946371 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:38Z","lastTransitionTime":"2026-01-26T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.953708 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca24
1a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.964907 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.981875 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.983193 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg59w\" (UniqueName: \"kubernetes.io/projected/a4d1d479-6214-447e-95c4-b563700234d0-kube-api-access-fg59w\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.983237 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4d1d479-6214-447e-95c4-b563700234d0-host\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.983267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4d1d479-6214-447e-95c4-b563700234d0-serviceca\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.983814 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4d1d479-6214-447e-95c4-b563700234d0-host\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.984118 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4d1d479-6214-447e-95c4-b563700234d0-serviceca\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:38 crc kubenswrapper[4827]: I0126 09:06:38.994991 4827 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:38Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.003729 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg59w\" (UniqueName: \"kubernetes.io/projected/a4d1d479-6214-447e-95c4-b563700234d0-kube-api-access-fg59w\") pod \"node-ca-qn5kf\" (UID: \"a4d1d479-6214-447e-95c4-b563700234d0\") " pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.012831 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.023649 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.035129 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.045357 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.048577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.048624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.048651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 
09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.048670 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.048683 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.057034 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operat
or\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.066339 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.080727 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.092196 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.106076 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.109370 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qn5kf" Jan 26 09:06:39 crc kubenswrapper[4827]: W0126 09:06:39.125839 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4d1d479_6214_447e_95c4_b563700234d0.slice/crio-2a841e3a2addc0733740a0238f54e23904005238186ac757b56652036952cb6c WatchSource:0}: Error finding container 2a841e3a2addc0733740a0238f54e23904005238186ac757b56652036952cb6c: Status 404 returned error can't find the container with id 2a841e3a2addc0733740a0238f54e23904005238186ac757b56652036952cb6c Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.126059 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.139180 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.151305 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.151350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.151362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.151379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.151392 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.153975 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.167732 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.179694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.256183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.256227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.256237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.256256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.256266 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.358817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.359151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.359165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.359182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.359200 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.461315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.461361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.461373 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.461389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.461401 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.564001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.564043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.564054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.564071 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.564090 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.576692 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:45:04.566552064 +0000 UTC Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.666891 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.666933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.666941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.666957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.666967 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.702116 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:39 crc kubenswrapper[4827]: E0126 09:06:39.702256 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.769913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.769982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.770001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.770023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.770038 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.852962 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c" exitCode=0 Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.852997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.854679 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qn5kf" event={"ID":"a4d1d479-6214-447e-95c4-b563700234d0","Type":"ContainerStarted","Data":"650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.854709 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qn5kf" event={"ID":"a4d1d479-6214-447e-95c4-b563700234d0","Type":"ContainerStarted","Data":"2a841e3a2addc0733740a0238f54e23904005238186ac757b56652036952cb6c"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.858423 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.872766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.872795 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.872804 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.872821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.872832 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.882193 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.900972 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.917014 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.931134 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.943445 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.959172 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.972291 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.978922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.978975 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.978989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:39 crc 
kubenswrapper[4827]: I0126 09:06:39.979002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.979011 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:39Z","lastTransitionTime":"2026-01-26T09:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.986002 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:39 crc kubenswrapper[4827]: I0126 09:06:39.998448 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.011031 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.026193 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.040877 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 
09:06:40.053961 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.067023 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.081279 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.081975 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.082016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.082024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.082040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.082049 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.092891 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.106379 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.120243 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.136029 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.149803 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 
09:06:40.160497 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.172497 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.183579 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.184566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.184596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.184604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc 
kubenswrapper[4827]: I0126 09:06:40.184616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.184626 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.196573 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.207991 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.218092 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.227410 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.243531 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.287180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.287248 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.287257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.287272 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.287282 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.389667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.389712 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.389722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.389737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.389746 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.492241 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.492291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.492305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.492323 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.492335 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.577566 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:47:34.351853129 +0000 UTC Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.594518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.594565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.594575 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.594591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.594602 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.697137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.697203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.697220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.697245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.697281 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.701764 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.701846 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:40 crc kubenswrapper[4827]: E0126 09:06:40.701980 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:40 crc kubenswrapper[4827]: E0126 09:06:40.702075 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.800800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.800849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.800861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.800881 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.800899 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.864987 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3" exitCode=0 Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.865058 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.879183 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.896057 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.907078 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.907128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.907141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:40 crc 
kubenswrapper[4827]: I0126 09:06:40.907161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.907173 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:40Z","lastTransitionTime":"2026-01-26T09:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.910570 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.920221 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.930005 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.937925 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.956304 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.968183 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.980174 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:40 crc kubenswrapper[4827]: I0126 09:06:40.992621 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:40Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009193 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009837 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009887 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.009897 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.021280 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.035475 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.045411 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.112281 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.112314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.112325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.112341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.112351 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.215117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.215162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.215178 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.215201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.215218 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.317963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.318001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.318009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.318024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.318034 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.419992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.420036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.420046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.420061 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.420074 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.482827 4827 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.522392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.522452 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.522473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.522506 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.522529 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.578342 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:29:25.154174619 +0000 UTC Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.626074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.626112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.626126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.626145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.626159 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.702313 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:41 crc kubenswrapper[4827]: E0126 09:06:41.702509 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.721500 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.732467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.732510 4827 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.732526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.732544 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.732559 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.737277 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.751294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.762520 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.791848 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.803412 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.819344 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.834671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.834935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.835041 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc 
kubenswrapper[4827]: I0126 09:06:41.835151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.835234 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.835133 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.847779 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.857881 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.870943 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.871247 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.873653 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d" exitCode=0 Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.873723 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.883335 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.894417 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.906910 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.922446 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.934589 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.939873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.939910 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.939921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.939938 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.939947 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:41Z","lastTransitionTime":"2026-01-26T09:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.949745 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.960403 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.971621 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.982596 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:41 crc kubenswrapper[4827]: I0126 09:06:41.992843 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.002722 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.012156 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.020068 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.038362 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.041428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.041455 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.041463 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.041475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.041483 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.051788 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.061436 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.072458 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.144233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc 
kubenswrapper[4827]: I0126 09:06:42.144266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.144277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.144289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.144298 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.246371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.246426 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.246434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.246445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.246454 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.349071 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.349129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.349141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.349157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.349168 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.453016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.453085 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.453102 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.453125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.453142 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.555718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.555771 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.555782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.555800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.555814 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.579155 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:10:14.298735687 +0000 UTC Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.657948 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.658005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.658024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.658046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.658058 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.702525 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.702560 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:42 crc kubenswrapper[4827]: E0126 09:06:42.702698 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:42 crc kubenswrapper[4827]: E0126 09:06:42.702860 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.760115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.760151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.760163 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.760180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.760192 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.862493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.862526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.862539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.862555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.862566 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.884064 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7e37ec5-8c72-432d-9809-ac670c707671" containerID="fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80" exitCode=0 Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.884189 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerDied","Data":"fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.901814 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.914543 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.928840 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.942100 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.950829 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.964301 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.966493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.966522 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.966530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.966543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.966552 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:42Z","lastTransitionTime":"2026-01-26T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:42 crc kubenswrapper[4827]: I0126 09:06:42.977061 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a7588
45f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.036418 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.048399 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.057256 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.067999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.068025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.068033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.068045 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.068053 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.076252 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.086937 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.105059 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.120103 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.171742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.171773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.171783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.171799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.171808 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.274530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.274567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.274578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.274602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.274614 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.377722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.377767 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.377778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.377794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.377805 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.481011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.481260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.481334 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.481409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.481475 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.698022 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:40:47.080265974 +0000 UTC Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.699540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.699576 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.699590 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.699608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.699621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.704179 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:43 crc kubenswrapper[4827]: E0126 09:06:43.704298 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.801433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.801464 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.801475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.801490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.801502 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.904045 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.904091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.904102 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.904117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:43 crc kubenswrapper[4827]: I0126 09:06:43.904128 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:43Z","lastTransitionTime":"2026-01-26T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.007103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.007146 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.007158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.007174 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.007186 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.109581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.109620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.109630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.109686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.109699 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.213114 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.213164 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.213179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.213215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.213227 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.315367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.315416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.315428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.315445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.315457 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.418451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.418487 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.418497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.418511 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.418520 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.521247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.521295 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.521307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.521324 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.521342 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.623724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.623758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.623766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.623779 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.623788 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.699224 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:07:07.694607024 +0000 UTC Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.702569 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.702589 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:44 crc kubenswrapper[4827]: E0126 09:06:44.702777 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:44 crc kubenswrapper[4827]: E0126 09:06:44.702901 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.726023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.726057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.726069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.726085 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.726095 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.828297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.828356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.828372 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.828396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.828414 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.897290 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" event={"ID":"d7e37ec5-8c72-432d-9809-ac670c707671","Type":"ContainerStarted","Data":"0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.902671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.903038 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.903063 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.917731 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.929619 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.930155 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.931534 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.931569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.931582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.931597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.931607 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:44Z","lastTransitionTime":"2026-01-26T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.939391 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.950968 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.963959 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.975911 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:44 crc kubenswrapper[4827]: I0126 09:06:44.995344 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.004864 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.015890 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.027044 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.033796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.033838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.033850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc 
kubenswrapper[4827]: I0126 09:06:45.033866 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.033877 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.041895 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.060277 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.072926 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.073856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.073893 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.073902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.073945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.073953 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.084278 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.084997 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.087903 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.087935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.087952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.087976 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.087987 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.102806 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.102831 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.106808 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.106859 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.106873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.106903 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.106918 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.114723 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.117755 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.120810 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.120848 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.120860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.120878 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.120891 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.124857 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.137881 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.138738 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.141351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.141382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.141390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.141402 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.141412 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.149159 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.151678 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.151788 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.153261 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.153290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.153298 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.153312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.153320 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.167239 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.178002 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.188867 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.199560 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.212044 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.224371 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.236489 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.245367 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.255935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.255981 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.255991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.256007 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.256018 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.257977 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.269040 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.359293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.359320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.359328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc 
kubenswrapper[4827]: I0126 09:06:45.359340 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.359348 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.460987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.461020 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.461031 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.461045 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.461055 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.563731 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.563774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.563783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.563798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.563807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.665868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.665906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.665914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.665928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.665941 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.700111 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:13:34.472077173 +0000 UTC Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.702816 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:45 crc kubenswrapper[4827]: E0126 09:06:45.702947 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.768536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.768582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.768593 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.768610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.768621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.870516 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.870545 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.870556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.870571 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.870582 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.905411 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.972504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.972534 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.972543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.972554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:45 crc kubenswrapper[4827]: I0126 09:06:45.972563 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:45Z","lastTransitionTime":"2026-01-26T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.074287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.074335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.074347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.074363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.074375 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.176854 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.176890 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.176900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.176915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.176926 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.280476 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.280568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.280606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.280635 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.280711 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.315762 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.383829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.383872 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.383886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.383906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.383920 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.429117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429344 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:07:02.429307229 +0000 UTC m=+51.077979108 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.429414 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.429458 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.429495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.429562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429613 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429658 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429670 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429712 4827 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429717 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:02.429702049 +0000 UTC m=+51.078373868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429827 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429838 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429842 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:02.429814192 +0000 UTC m=+51.078486051 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429846 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429887 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.429936 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:02.429916985 +0000 UTC m=+51.078588894 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.430039 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:02.429973867 +0000 UTC m=+51.078645726 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.486240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.486314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.486331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.486355 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.486374 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.589305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.589347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.589361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.589381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.589397 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.691318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.691385 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.691397 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.691434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.691448 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.700471 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:15:23.360887628 +0000 UTC Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.702853 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.702924 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.702984 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:46 crc kubenswrapper[4827]: E0126 09:06:46.703073 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.793800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.793851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.793862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.793878 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.793888 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.896097 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.896144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.896154 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.896172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.896182 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.909480 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/0.log" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.912020 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545" exitCode=1 Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.912051 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.912583 4827 scope.go:117] "RemoveContainer" containerID="87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.929343 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.956316 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:46Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953003 6044 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953042 6044 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953079 6044 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953125 6044 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953230 6044 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953332 6044 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953717 6044 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.954309 6044 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.969723 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.985614 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999012 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999051 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999061 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999089 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:46Z","lastTransitionTime":"2026-01-26T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:46 crc kubenswrapper[4827]: I0126 09:06:46.999239 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.013411 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.025238 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.038100 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.055666 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff
19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:
06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.065950 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.078688 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.090809 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.100668 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.100701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.100711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.100727 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.100738 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.105153 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.116418 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.203691 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.203723 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.203731 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.203746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.203754 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.305356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.305398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.305406 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.305420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.305430 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.407969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.408017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.408029 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.408046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.408058 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.410010 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.423754 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.434794 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.447374 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.458885 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.470948 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.483509 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.492148 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510598 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510630 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.510766 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:46Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953003 6044 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953042 6044 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953079 6044 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953125 6044 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953230 6044 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953332 6044 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953717 6044 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.954309 6044 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.523627 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.582218 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.616260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.616307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.616318 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.616338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.616349 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.621865 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.660520 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.683205 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.700676 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:30:58.63067333 +0000 UTC Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.702016 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:47 crc kubenswrapper[4827]: E0126 09:06:47.702151 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.705050 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.718839 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.718901 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.718911 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.718926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.718938 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.820690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.820969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.820979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.820992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.821000 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.915838 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/1.log" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.916499 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/0.log" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.919046 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548" exitCode=1 Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.919080 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.919162 4827 scope.go:117] "RemoveContainer" containerID="87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.920082 4827 scope.go:117] "RemoveContainer" containerID="a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548" Jan 26 09:06:47 crc kubenswrapper[4827]: E0126 09:06:47.920284 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.925271 4827 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.925323 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.925338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.925351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.925360 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:47Z","lastTransitionTime":"2026-01-26T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.941326 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.952014 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.965611 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.974260 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.984271 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:47 crc kubenswrapper[4827]: I0126 09:06:47.994063 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.005019 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.015110 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.023859 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.027404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.027436 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.027444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.027458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.027467 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.046693 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87208a7e8ac6992cad1c09112dd4ba2f55b018d30af22685e601a8bc3388e545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:46Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953003 6044 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953042 6044 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:06:45.953079 6044 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953125 6044 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953230 6044 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.953332 6044 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 09:06:45.953717 6044 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:06:45.954309 6044 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBytes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.061390 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.073995 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.087594 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.102108 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:
30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0
cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.129630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.129677 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.129686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.129698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.129707 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.231539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.231885 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.231959 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.232021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.232108 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.334812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.335065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.335159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.335271 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.335364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.438389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.438451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.438471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.438492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.438504 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.541987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.542404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.542597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.542796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.542952 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.645400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.645443 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.645454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.645470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.645481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.701245 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:33:38.092866172 +0000 UTC Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.702746 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:48 crc kubenswrapper[4827]: E0126 09:06:48.702890 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.702755 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:48 crc kubenswrapper[4827]: E0126 09:06:48.703476 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.773513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.773868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.773969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.774060 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.774151 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.877354 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.877398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.877410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.877426 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.877439 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.925713 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/1.log" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.930305 4827 scope.go:117] "RemoveContainer" containerID="a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548" Jan 26 09:06:48 crc kubenswrapper[4827]: E0126 09:06:48.930463 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.951175 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.965417 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.976193 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.980495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.980554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.980577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.980608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:48 crc kubenswrapper[4827]: I0126 09:06:48.980632 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:48Z","lastTransitionTime":"2026-01-26T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.001589 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.019988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.031672 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.043302 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.059228 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:
30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0
cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.071598 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.082918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.083058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.083139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.083205 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.083260 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.086936 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.103855 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.114415 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.127558 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.139064 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.185824 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.186043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.186140 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc 
kubenswrapper[4827]: I0126 09:06:49.186225 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.186321 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.231368 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr"] Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.231832 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.235966 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.236181 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.248107 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.263064 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.275932 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.285914 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.289097 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.289216 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.289293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.289369 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.289458 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.291666 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.291734 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.291788 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f9ee397-1413-403b-9884-232263b4ebe7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.291814 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjb8\" (UniqueName: \"kubernetes.io/projected/4f9ee397-1413-403b-9884-232263b4ebe7-kube-api-access-2tjb8\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.304821 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.317157 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.330052 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.342187 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.353557 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.367708 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.383996 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.396562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.396649 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.396697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f9ee397-1413-403b-9884-232263b4ebe7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.396736 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjb8\" (UniqueName: \"kubernetes.io/projected/4f9ee397-1413-403b-9884-232263b4ebe7-kube-api-access-2tjb8\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.397699 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.397982 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f9ee397-1413-403b-9884-232263b4ebe7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.398675 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.398729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.398741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.398761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.398773 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.401236 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.404976 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f9ee397-1413-403b-9884-232263b4ebe7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.418072 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.421954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjb8\" (UniqueName: \"kubernetes.io/projected/4f9ee397-1413-403b-9884-232263b4ebe7-kube-api-access-2tjb8\") pod \"ovnkube-control-plane-749d76644c-8srzr\" (UID: \"4f9ee397-1413-403b-9884-232263b4ebe7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.433625 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.446269 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.500811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.500842 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.500850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc 
kubenswrapper[4827]: I0126 09:06:49.500910 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.500922 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.544161 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.606376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.606450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.606464 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.608296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.608484 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.701973 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:26:30.733152825 +0000 UTC Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.702076 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:49 crc kubenswrapper[4827]: E0126 09:06:49.702159 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.710962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.710996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.711004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.711017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.711026 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.813387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.813434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.813446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.813462 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.813473 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.916065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.916106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.916117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.916133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.916143 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:49Z","lastTransitionTime":"2026-01-26T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:49 crc kubenswrapper[4827]: I0126 09:06:49.935202 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" event={"ID":"4f9ee397-1413-403b-9884-232263b4ebe7","Type":"ContainerStarted","Data":"593cbdaf212638ea5139b8a65bd69b5ed089a0377261a145911d66e3fc3e27c1"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.020000 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.020061 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.020075 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.020098 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.020118 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.124067 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.124138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.124157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.124185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.124202 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.236530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.236850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.237189 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.237514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.237686 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.344513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.344547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.344555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.344569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.344580 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.448232 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.448268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.448277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.448289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.448298 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.550389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.550427 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.550452 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.550470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.550481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.652733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.652763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.652772 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.652785 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.652794 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.702686 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:28:13.658906626 +0000 UTC Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.702821 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:50 crc kubenswrapper[4827]: E0126 09:06:50.702931 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.702815 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:50 crc kubenswrapper[4827]: E0126 09:06:50.703367 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.755497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.755547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.755563 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.755584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.755599 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.857949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.857992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.858003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.858018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.858029 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.940744 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" event={"ID":"4f9ee397-1413-403b-9884-232263b4ebe7","Type":"ContainerStarted","Data":"402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.940808 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" event={"ID":"4f9ee397-1413-403b-9884-232263b4ebe7","Type":"ContainerStarted","Data":"c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.955078 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.959945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.959986 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.959999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.960016 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.960028 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:50Z","lastTransitionTime":"2026-01-26T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.974863 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:50 crc kubenswrapper[4827]: I0126 09:06:50.989045 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.001207 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.016560 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.033776 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 
09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.048554 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.063341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.063388 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.063401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.063418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.063429 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.065230 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.079245 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-k927z"] Jan 26 09:06:51 crc 
kubenswrapper[4827]: I0126 09:06:51.079890 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.079970 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.082931 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819e
edb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9
abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.094470 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.108422 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.114337 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.114391 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng82w\" (UniqueName: \"kubernetes.io/projected/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-kube-api-access-ng82w\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.121504 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.133662 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.145322 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.157400 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.165717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.165757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.165768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.165790 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.165802 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.173084 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.183518 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc 
kubenswrapper[4827]: I0126 09:06:51.196917 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.212727 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.215204 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.215309 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ng82w\" (UniqueName: \"kubernetes.io/projected/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-kube-api-access-ng82w\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.215347 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.215403 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:06:51.715387851 +0000 UTC m=+40.364059670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.228402 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.238864 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng82w\" (UniqueName: \"kubernetes.io/projected/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-kube-api-access-ng82w\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.253185 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.268135 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.269088 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.269123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.269136 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.269161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.269176 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.281761 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc 
kubenswrapper[4827]: I0126 09:06:51.297961 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.315850 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.330047 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.343937 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.358923 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.371437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.371499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.371513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.371531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.371544 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.381370 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.397220 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.412965 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.474587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.474633 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.474686 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.474708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.474726 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.577495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.577555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.577567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.577584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.577597 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.680595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.680700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.680770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.680798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.680818 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.702794 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.702991 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.703124 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:54:57.894829051 +0000 UTC Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.720893 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.721383 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:51 crc kubenswrapper[4827]: E0126 09:06:51.721681 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:06:52.72162032 +0000 UTC m=+41.370292349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.723950 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.744024 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.771244 4827 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac628212
35d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.784475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.784520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.784530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.784552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.784565 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.789391 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.806064 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c1
76d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.824740 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.837545 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.856685 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.871653 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.884611 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.886817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.886868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.886880 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.886897 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.886917 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.912371 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.926420 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.940120 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.955583 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.966993 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc 
kubenswrapper[4827]: I0126 09:06:51.980609 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.989881 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.989918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.989930 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.989949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:51 crc kubenswrapper[4827]: I0126 09:06:51.989963 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:51Z","lastTransitionTime":"2026-01-26T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.092321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.092359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.092370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.092386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.092398 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.194940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.194986 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.194998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.195017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.195028 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.298279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.298330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.298342 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.298360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.298372 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.400859 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.400892 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.400900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.400913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.400923 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.503692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.503739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.503764 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.503788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.503803 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.607420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.607469 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.607490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.607517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.607533 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.702938 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.702938 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:52 crc kubenswrapper[4827]: E0126 09:06:52.703483 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:52 crc kubenswrapper[4827]: E0126 09:06:52.703504 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.702950 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:52 crc kubenswrapper[4827]: E0126 09:06:52.703906 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.703817 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:41:23.951489037 +0000 UTC Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.710305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.710351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.710364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.710381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.710395 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.732181 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:52 crc kubenswrapper[4827]: E0126 09:06:52.732335 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:52 crc kubenswrapper[4827]: E0126 09:06:52.732419 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:06:54.732399332 +0000 UTC m=+43.381071161 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.813144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.813375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.813474 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.813594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.813722 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.917085 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.917311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.917393 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.917495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:52 crc kubenswrapper[4827]: I0126 09:06:52.917578 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:52Z","lastTransitionTime":"2026-01-26T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.019503 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.019553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.019565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.019583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.019595 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.122607 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.122684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.122696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.122712 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.122723 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.225472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.225542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.225558 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.225580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.225597 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.328165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.328197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.328205 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.328216 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.328224 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.431347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.431393 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.431403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.431420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.431431 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.533794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.533864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.533883 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.533947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.533969 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.637569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.637785 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.637807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.637875 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.637894 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.702471 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:53 crc kubenswrapper[4827]: E0126 09:06:53.702627 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.704669 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:57:29.755123254 +0000 UTC Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.740982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.741014 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.741022 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.741041 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.741049 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.845686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.845730 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.845747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.845766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.845775 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.949391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.949470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.949489 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.949540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:53 crc kubenswrapper[4827]: I0126 09:06:53.949550 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:53Z","lastTransitionTime":"2026-01-26T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.051879 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.051913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.051922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.051954 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.051964 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.155362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.155413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.155428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.155450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.155467 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.258364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.258400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.258410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.258425 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.258436 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.361462 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.361585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.361596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.361608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.361617 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.464302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.464362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.464380 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.464404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.464422 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.566487 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.566533 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.566546 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.566567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.566582 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.670243 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.670768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.670987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.671159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.671321 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.702879 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.702914 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:54 crc kubenswrapper[4827]: E0126 09:06:54.703153 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:06:54 crc kubenswrapper[4827]: E0126 09:06:54.703258 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.702917 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:54 crc kubenswrapper[4827]: E0126 09:06:54.703890 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.705606 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:20:29.052923515 +0000 UTC Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.752415 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:54 crc kubenswrapper[4827]: E0126 09:06:54.752588 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:54 crc kubenswrapper[4827]: E0126 09:06:54.752675 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:06:58.75265863 +0000 UTC m=+47.401330449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.774139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.774182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.774190 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.774203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.774211 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.877967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.878013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.878027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.878044 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.878056 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.983321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.983423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.983487 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.983516 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:54 crc kubenswrapper[4827]: I0126 09:06:54.983579 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:54Z","lastTransitionTime":"2026-01-26T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.086766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.086847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.086871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.086902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.086922 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.187969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.188028 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.188040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.188060 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.188072 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: E0126 09:06:55.207240 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:55Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.212613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.212674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.212685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.212701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.212712 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: E0126 09:06:55.225371 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:55Z is after 2025-08-24T17:21:41Z"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.234313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.234384 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.234394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.234410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.234424 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: E0126 09:06:55.248579 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.253244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.253277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.253287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.253303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.253314 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.280337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.280411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.280428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.280449 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.280464 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: E0126 09:06:55.311241 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.313217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.313267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.313280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.313302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.313315 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.416073 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.416115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.416126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.416142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.416153 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.518913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.518984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.519002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.519024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.519039 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.621257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.621511 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.621577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.621674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.621784 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.702243 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:55 crc kubenswrapper[4827]: E0126 09:06:55.702383 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.705744 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:38:24.242682485 +0000 UTC Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.724453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.724956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.725055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.725187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.725275 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.834122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.834166 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.834177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.834194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.834207 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.936932 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.937269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.937401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.937505 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:55 crc kubenswrapper[4827]: I0126 09:06:55.937595 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:55Z","lastTransitionTime":"2026-01-26T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.039956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.040034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.040058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.040087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.040110 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.142601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.142689 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.142700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.142716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.142727 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.246306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.246358 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.246370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.246389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.246406 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.348853 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.348896 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.348907 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.348922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.348944 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.451479 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.451520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.451533 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.451550 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.451563 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.553777 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.553835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.553900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.553928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.553993 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.656212 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.656258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.656266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.656277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.656286 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.701843 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.701883 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:56 crc kubenswrapper[4827]: E0126 09:06:56.701968 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.701997 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:56 crc kubenswrapper[4827]: E0126 09:06:56.702136 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:56 crc kubenswrapper[4827]: E0126 09:06:56.702222 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.706344 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:25:27.59991199 +0000 UTC Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.758914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.758945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.758956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.758969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.758976 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.861420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.861477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.861499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.861519 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.861533 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.963917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.963953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.963962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.963975 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:56 crc kubenswrapper[4827]: I0126 09:06:56.963986 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:56Z","lastTransitionTime":"2026-01-26T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.066145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.066194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.066207 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.066224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.066236 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.168787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.168847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.168859 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.168882 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.168903 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.271365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.271430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.271448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.271471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.271487 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.373434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.373467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.373478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.373490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.373500 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.475900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.475952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.475965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.475982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.475994 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.578187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.578240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.578249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.578262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.578271 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.680372 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.680421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.680432 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.680450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.680462 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.701960 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:57 crc kubenswrapper[4827]: E0126 09:06:57.702093 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.706427 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:48:23.262294156 +0000 UTC Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.783015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.783064 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.783074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.783097 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.783110 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.885917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.885944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.885951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.885964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.885974 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.988206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.988251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.988264 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.988284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:57 crc kubenswrapper[4827]: I0126 09:06:57.988297 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:57Z","lastTransitionTime":"2026-01-26T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.090103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.090154 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.090165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.090182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.090194 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.192665 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.192701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.192713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.192728 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.192739 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.296261 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.296325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.296341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.296366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.296389 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.399078 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.399132 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.399147 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.399177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.399198 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.502235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.502285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.502298 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.502320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.502333 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.605613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.605669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.605681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.605697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.605707 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.702571 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.702675 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.702605 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:58 crc kubenswrapper[4827]: E0126 09:06:58.702817 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:06:58 crc kubenswrapper[4827]: E0126 09:06:58.702758 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:06:58 crc kubenswrapper[4827]: E0126 09:06:58.702931 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.706690 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:21:21.711753725 +0000 UTC Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.708269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.708310 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.708327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.708345 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.708392 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.794412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:06:58 crc kubenswrapper[4827]: E0126 09:06:58.794556 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:58 crc kubenswrapper[4827]: E0126 09:06:58.794610 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:07:06.794595795 +0000 UTC m=+55.443267604 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.810934 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.810979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.810990 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.811006 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.811016 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.914131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.914200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.914220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.914256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:58 crc kubenswrapper[4827]: I0126 09:06:58.914278 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:58Z","lastTransitionTime":"2026-01-26T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.017229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.017433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.017700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.017775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.017829 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.120261 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.120516 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.120663 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.120778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.120863 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.223413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.223467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.223478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.223497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.223509 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.326416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.326467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.326483 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.326508 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.326525 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.429495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.429561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.429576 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.429597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.429613 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.532115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.532160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.532170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.532185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.532194 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.636928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.636998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.637054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.637098 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.637121 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.702277 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:06:59 crc kubenswrapper[4827]: E0126 09:06:59.702455 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.703414 4827 scope.go:117] "RemoveContainer" containerID="a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.707119 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:17:25.400796498 +0000 UTC Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.740514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.740570 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.740595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.740626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.740683 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.843542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.843578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.843587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.843606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.843616 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.946710 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.946778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.946792 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.946876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.946887 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:06:59Z","lastTransitionTime":"2026-01-26T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.972456 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/1.log" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.976069 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57"} Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.976856 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:06:59 crc kubenswrapper[4827]: I0126 09:06:59.992581 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.005800 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.023842 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.035714 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049609 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049653 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049665 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.049726 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc 
kubenswrapper[4827]: I0126 09:07:00.071344 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.088883 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.101902 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.113588 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.133141 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.148448 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.152031 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.152069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.152079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.152124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.152136 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.173193 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.186918 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.197614 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.211964 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.222066 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:00 crc 
kubenswrapper[4827]: I0126 09:07:00.255659 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.255693 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.255702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.255718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.255728 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.358608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.358657 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.358668 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.358683 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.358693 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.461136 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.461202 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.461217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.461234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.461247 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.563703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.563747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.563759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.563776 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.563785 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.665977 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.666023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.666034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.666050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.666059 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.702519 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.702592 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:00 crc kubenswrapper[4827]: E0126 09:07:00.702689 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.702522 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:00 crc kubenswrapper[4827]: E0126 09:07:00.702763 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:00 crc kubenswrapper[4827]: E0126 09:07:00.702820 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.707719 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:02:39.446398662 +0000 UTC Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.769220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.769265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.769282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.769305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.769352 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.872425 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.872495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.872518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.872547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.872572 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.976250 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.976312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.976328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.976352 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.976370 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:00Z","lastTransitionTime":"2026-01-26T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.983219 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/2.log" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.984909 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/1.log" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.990513 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" exitCode=1 Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.990582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57"} Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.990679 4827 scope.go:117] "RemoveContainer" containerID="a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548" Jan 26 09:07:00 crc kubenswrapper[4827]: I0126 09:07:00.992072 4827 scope.go:117] "RemoveContainer" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" Jan 26 09:07:00 crc kubenswrapper[4827]: E0126 09:07:00.992411 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.013110 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.029272 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.050703 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBytes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558
726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.052557 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.065124 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.068015 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.078439 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.078477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.078489 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.078504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.078515 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.080691 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.093789 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.105267 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc 
kubenswrapper[4827]: I0126 09:07:01.116755 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.128081 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.138535 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.152228 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.161455 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.171998 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.180929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.180964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.180974 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.180988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.180998 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.182890 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.194151 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.203945 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.215428 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.225224 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.235355 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.247624 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.257184 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.274294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name
\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 
09:07:01.283376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.283411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.283437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.283458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.283472 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.289311 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca24
1a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.301965 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.314125 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.324096 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc 
kubenswrapper[4827]: I0126 09:07:01.335673 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.345831 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.358063 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.374593 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.385318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.385351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.385359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.385371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.385379 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.388775 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.398141 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c1
76d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.411014 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.487125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.487158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.487170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.487186 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.487198 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.590179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.590249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.590262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.590280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.590292 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.693058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.693121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.693138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.693161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.693178 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.702921 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:01 crc kubenswrapper[4827]: E0126 09:07:01.703088 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.708544 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:06:34.019776588 +0000 UTC Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.723128 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0f
b17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.743151 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.759365 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.772373 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc 
kubenswrapper[4827]: I0126 09:07:01.784930 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.795698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.795747 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.795762 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.795782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.795797 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.803088 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.816897 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.830262 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.838265 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.847750 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.858069 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.869018 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.883922 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.896237 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.897531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.897564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.897574 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.897591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.897605 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.907577 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.917151 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.932632 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7ef8fe6614368017ff797aede6b619d46490848c5b8e90d36bee2b901ee6548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"message\\\":\\\"37633c1ddb0495],SizeBytes:473958144,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717],SizeBytes:463179365,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c],SizeBy
tes:460774792,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113],SizeBytes:459737917,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09],SizeBytes:457588564,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:crun,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:\\\\u0026NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*false,},},},Features:nil,},}\\\\nI0126 09:06:47.796216 6169 egressqos.go:1009] Finished syncing EgressQoS node crc : 107.597155ms\\\\nI0126 09:06:47.796253 6169 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name
\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 
09:07:01.994906 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/2.log" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.998924 4827 scope.go:117] "RemoveContainer" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.999206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:01 crc kubenswrapper[4827]: E0126 09:07:01.999209 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.999232 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.999279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.999307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:01 crc kubenswrapper[4827]: I0126 09:07:01.999332 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:01Z","lastTransitionTime":"2026-01-26T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.014378 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4c
d8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.027017 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.037852 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.056488 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.070335 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.087073 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.098268 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.102267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.102302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.102315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.102333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.102345 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.126841 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.143722 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.159605 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.183013 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.195016 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc 
kubenswrapper[4827]: I0126 09:07:02.204495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.204672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.204698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.204726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.204743 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.213962 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.231385 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.251937 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.268069 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.282508 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.307471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.307744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.307874 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.307982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.308092 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.411324 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.411385 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.411403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.411425 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.411441 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.429901 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:07:34.429875497 +0000 UTC m=+83.078547346 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.429906 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.430364 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.430571 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.430894 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.430587 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431237 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:34.431216653 +0000 UTC m=+83.079888512 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.430692 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431295 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:34.431280545 +0000 UTC m=+83.079952394 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431066 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431397 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431421 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431468 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431517 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431540 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431487 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:34.43146598 +0000 UTC m=+83.080137839 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.431611 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:07:34.431586943 +0000 UTC m=+83.080258802 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.431171 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.514068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.514100 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.514108 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.514122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.514131 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.616898 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.617412 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.617707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.617897 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.618025 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.701859 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.701899 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.702000 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.702053 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.702099 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:02 crc kubenswrapper[4827]: E0126 09:07:02.702457 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.708688 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:02:26.30829026 +0000 UTC Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.720449 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.720518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.720531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.720551 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.720563 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.823096 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.823141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.823153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.823171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.823184 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.925532 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.925592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.925603 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.925617 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:02 crc kubenswrapper[4827]: I0126 09:07:02.925629 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:02Z","lastTransitionTime":"2026-01-26T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.028219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.028260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.028276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.028291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.028300 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.130390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.130430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.130467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.130482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.130490 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.232774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.232803 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.232812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.232826 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.232837 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.335561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.335596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.335604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.335616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.335624 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.437724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.438278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.438364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.438478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.438573 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.540858 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.540886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.540896 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.540915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.540925 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.643486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.643584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.643611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.643696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.643735 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.702492 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:03 crc kubenswrapper[4827]: E0126 09:07:03.702630 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.708748 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:01:38.827800623 +0000 UTC Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.746724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.746779 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.746787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.746799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.746807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.849075 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.849371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.849683 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.850206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.850261 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.952662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.952717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.952734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.952759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:03 crc kubenswrapper[4827]: I0126 09:07:03.952776 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:03Z","lastTransitionTime":"2026-01-26T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.056021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.056074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.056086 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.056103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.056115 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.159015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.159087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.159109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.159138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.159157 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.262076 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.262143 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.262160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.262182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.262198 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.364162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.364204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.364215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.364230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.364241 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.466417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.467371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.467611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.467838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.468026 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.571395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.571445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.571460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.571480 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.571492 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.674234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.674320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.674356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.674394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.674423 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.702890 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.703016 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.703035 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:04 crc kubenswrapper[4827]: E0126 09:07:04.703142 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:04 crc kubenswrapper[4827]: E0126 09:07:04.703373 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:04 crc kubenswrapper[4827]: E0126 09:07:04.703468 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.708948 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:53:52.146899721 +0000 UTC Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.777898 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.777961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.777972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.777996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.778010 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.880792 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.881326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.881493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.881690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.881911 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.984625 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.984690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.984702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.984719 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:04 crc kubenswrapper[4827]: I0126 09:07:04.984732 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:04Z","lastTransitionTime":"2026-01-26T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.087197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.087234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.087243 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.087257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.087266 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.189079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.189123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.189135 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.189150 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.189163 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.291851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.292121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.292195 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.292262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.292319 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.395454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.395756 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.395878 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.395974 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.396064 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.498842 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.498904 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.498924 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.498953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.498974 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.601327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.601376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.601394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.601416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.601432 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.607569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.607635 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.607662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.607677 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.607688 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.619282 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.624103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.624157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.624173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.624196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.624215 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.639217 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.644514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.644739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.644826 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.644945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.645036 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.663121 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.668604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.668847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.668944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.669068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.669163 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.686869 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.691810 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.691870 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.691882 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.691925 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.691943 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.702294 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.702595 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.707230 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:05 crc kubenswrapper[4827]: E0126 09:07:05.707401 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709022 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709076 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709095 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709109 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.709347 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:19:37.884163855 +0000 UTC Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.811279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.811531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.811660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.811804 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.811899 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.914867 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.914900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.914908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.914924 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:05 crc kubenswrapper[4827]: I0126 09:07:05.914933 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:05Z","lastTransitionTime":"2026-01-26T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.018297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.018762 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.019028 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.019308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.019581 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.122849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.123350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.123463 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.123582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.123729 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.227049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.227553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.227774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.228294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.228540 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.333031 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.333089 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.333114 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.333142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.333168 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.435579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.435634 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.435727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.435757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.435779 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.539539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.539600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.539609 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.539623 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.539632 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.643058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.643144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.643170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.643204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.643283 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.702907 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.703018 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.702907 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:06 crc kubenswrapper[4827]: E0126 09:07:06.703099 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:06 crc kubenswrapper[4827]: E0126 09:07:06.703224 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:06 crc kubenswrapper[4827]: E0126 09:07:06.703359 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.710101 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:13:01.781466472 +0000 UTC Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.747095 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.747219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.747290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.747322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.747343 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.849775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.849830 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.849849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.849875 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.849892 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.879420 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:06 crc kubenswrapper[4827]: E0126 09:07:06.879626 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:07:06 crc kubenswrapper[4827]: E0126 09:07:06.879764 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:07:22.879737487 +0000 UTC m=+71.528409336 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.952135 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.952179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.952191 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.952206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:06 crc kubenswrapper[4827]: I0126 09:07:06.952217 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:06Z","lastTransitionTime":"2026-01-26T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.054670 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.054708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.054718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.054732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.054741 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.158084 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.158330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.158486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.158589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.158723 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.261674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.261728 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.261743 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.261763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.261778 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.364346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.364612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.364702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.364779 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.364859 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.467222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.467256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.467266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.467279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.467287 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.569620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.569660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.569669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.569682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.569691 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.672386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.672418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.672428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.672440 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.672448 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.702044 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:07 crc kubenswrapper[4827]: E0126 09:07:07.702138 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.711133 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:45:38.03941878 +0000 UTC Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.779447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.779481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.779491 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.779528 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.779540 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.883710 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.883780 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.883801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.883828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.883849 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.986549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.986586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.986596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.986610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:07 crc kubenswrapper[4827]: I0126 09:07:07.986621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:07Z","lastTransitionTime":"2026-01-26T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.089368 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.089597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.089720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.089797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.089867 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.191970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.192230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.192327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.192465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.192536 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.294921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.294964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.294976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.294994 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.295005 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.397267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.397307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.397317 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.397333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.397346 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.499517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.499569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.499585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.499606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.499621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.602280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.602318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.602327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.602341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.602351 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.702146 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.702201 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:08 crc kubenswrapper[4827]: E0126 09:07:08.702373 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.702423 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:08 crc kubenswrapper[4827]: E0126 09:07:08.702746 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:08 crc kubenswrapper[4827]: E0126 09:07:08.703416 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.704395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.704501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.704520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.704542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.704559 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.711523 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:26:49.390674061 +0000 UTC Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.807060 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.807101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.807114 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.807130 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.807163 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.910263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.910327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.910350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.910378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:08 crc kubenswrapper[4827]: I0126 09:07:08.910404 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:08Z","lastTransitionTime":"2026-01-26T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.012939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.013005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.013021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.013047 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.013063 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.115194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.115233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.115263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.115278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.115288 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.217953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.218027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.218044 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.218069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.218086 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.320859 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.320956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.320980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.321009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.321031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.422983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.423035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.423052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.423075 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.423093 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.525492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.525549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.525562 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.525583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.525598 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.628623 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.628682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.628690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.628703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.628712 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.702664 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:09 crc kubenswrapper[4827]: E0126 09:07:09.702797 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.712019 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:24:22.335865456 +0000 UTC Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.730546 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.730599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.730610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.730626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.730682 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.833035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.833067 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.833074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.833087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.833095 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.935557 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.935595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.935607 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.935621 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:09 crc kubenswrapper[4827]: I0126 09:07:09.935632 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:09Z","lastTransitionTime":"2026-01-26T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.038084 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.038139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.038155 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.038176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.038191 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.140284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.140484 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.140564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.140707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.140807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.243437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.243472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.243482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.243498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.243507 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.345414 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.345460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.345473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.345490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.345504 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.448319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.448363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.448380 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.448400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.448413 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.550580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.550620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.550629 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.550695 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.550705 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.653684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.653733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.653744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.653762 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.653774 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.702274 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.702334 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:10 crc kubenswrapper[4827]: E0126 09:07:10.702421 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.702290 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:10 crc kubenswrapper[4827]: E0126 09:07:10.702559 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:10 crc kubenswrapper[4827]: E0126 09:07:10.702629 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.712600 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:37:38.423769263 +0000 UTC Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.755322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.755354 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.755362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.755398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.755407 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.858115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.858155 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.858167 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.858185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.858198 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.961170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.961237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.961262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.961286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:10 crc kubenswrapper[4827]: I0126 09:07:10.961304 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:10Z","lastTransitionTime":"2026-01-26T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.063323 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.063363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.063375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.063390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.063401 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.165579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.165613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.165620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.165677 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.165686 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.269132 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.269187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.269200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.269217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.269228 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.371616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.371676 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.371705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.371724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.371738 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.474465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.474541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.474550 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.474564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.474573 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.576769 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.576977 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.577041 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.577124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.577185 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.679255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.679284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.679301 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.679315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.679325 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.703285 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:11 crc kubenswrapper[4827]: E0126 09:07:11.703469 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.713242 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:56:01.14207932 +0000 UTC Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.719954 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8
ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\
\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",
\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.731870 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.745915 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.759101 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.770947 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782027 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782095 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782144 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.782047 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.795278 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.808716 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.818004 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.834900 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.848965 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.863161 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.875919 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.884304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.884341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.884350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.884367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.884378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.885882 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc 
kubenswrapper[4827]: I0126 09:07:11.896923 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.909627 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.922006 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.985917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:11 crc 
kubenswrapper[4827]: I0126 09:07:11.985958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.985971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.985986 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:11 crc kubenswrapper[4827]: I0126 09:07:11.985998 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:11Z","lastTransitionTime":"2026-01-26T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.089475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.089509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.089521 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.089537 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.089552 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.197969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.198015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.198030 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.198050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.198064 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.301771 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.301813 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.301822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.301839 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.301848 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.403956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.403999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.404011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.404028 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.404040 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.506045 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.506090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.506101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.506117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.506127 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.609070 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.609111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.609122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.609138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.609148 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.702565 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:12 crc kubenswrapper[4827]: E0126 09:07:12.702702 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.702995 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:12 crc kubenswrapper[4827]: E0126 09:07:12.703041 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.703077 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:12 crc kubenswrapper[4827]: E0126 09:07:12.703115 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.711672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.711698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.711706 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.711718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.711727 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.714333 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:39:26.533627746 +0000 UTC
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.813689 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.813744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.813757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.813779 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.813791 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.916254 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.916306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.916319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.916336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:12 crc kubenswrapper[4827]: I0126 09:07:12.916349 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:12Z","lastTransitionTime":"2026-01-26T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.018440 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.018482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.018493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.018510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.018520 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.120387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.120467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.120477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.120492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.120502 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.223124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.223177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.223191 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.223211 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.223226 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.325322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.325367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.325384 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.325407 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.325422 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.428242 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.428295 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.428308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.428327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.428344 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.529956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.529992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.530005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.530020 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.530030 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.632852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.632894 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.632904 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.632921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.632932 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.702892 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:13 crc kubenswrapper[4827]: E0126 09:07:13.703182 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.703876 4827 scope.go:117] "RemoveContainer" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" Jan 26 09:07:13 crc kubenswrapper[4827]: E0126 09:07:13.704144 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.714469 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:51:19.599909057 +0000 UTC Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.736218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.736306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.736328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.736358 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.736381 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.841793 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.841845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.841857 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.841878 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.841890 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.944059 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.944093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.944102 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.944115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:13 crc kubenswrapper[4827]: I0126 09:07:13.944124 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:13Z","lastTransitionTime":"2026-01-26T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.052706 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.052742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.052753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.052767 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.052781 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.155932 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.155984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.155998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.156018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.156032 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.258724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.258753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.258761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.258773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.258783 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.361581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.361616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.361624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.361640 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.361650 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.464767 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.464851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.464864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.464882 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.464895 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.567832 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.567900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.567915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.567935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.567950 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.671058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.671116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.671133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.671158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.671176 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.702019 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.702060 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.702078 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:14 crc kubenswrapper[4827]: E0126 09:07:14.702219 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:14 crc kubenswrapper[4827]: E0126 09:07:14.702380 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:14 crc kubenswrapper[4827]: E0126 09:07:14.702509 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.715093 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:28:03.960501511 +0000 UTC Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.774609 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.774671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.774689 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.774711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.774729 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.878162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.878207 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.878219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.878238 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.878252 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.980788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.981049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.981112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.981131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:14 crc kubenswrapper[4827]: I0126 09:07:14.981142 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:14Z","lastTransitionTime":"2026-01-26T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.083281 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.083309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.083348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.083365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.083378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.185722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.185788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.185804 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.185832 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.185848 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.288309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.288344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.288356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.288371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.288382 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.391321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.391379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.391389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.391405 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.391414 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.494264 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.494297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.494305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.494335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.494344 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.596850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.596892 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.596902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.596918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.596928 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.698838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.698873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.698884 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.698900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.698911 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.702885 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.703003 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.715631 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:20:47.271725447 +0000 UTC Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.801117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.801145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.801157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.801171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.801183 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.864843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.865072 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.865085 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.865099 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.865110 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.876833 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.880346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.880401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.880414 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.880431 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.880442 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.894116 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.898187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.898236 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.898251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.898269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.898280 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.915424 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.919553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.919582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.919594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.919610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.919621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.930239 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.934002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.934045 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.934082 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.934100 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.934115 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.946843 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:15 crc kubenswrapper[4827]: E0126 09:07:15.947062 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.948830 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.948899 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.948917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.949339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:15 crc kubenswrapper[4827]: I0126 09:07:15.949399 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:15Z","lastTransitionTime":"2026-01-26T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.051747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.051783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.051794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.051809 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.051820 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.154053 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.154091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.154103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.154117 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.154128 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.256434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.256460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.256468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.256480 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.256489 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.359336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.359374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.359385 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.359401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.359413 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.462173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.462224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.462235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.462247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.462257 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.564840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.564887 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.564895 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.564912 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.564923 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.668655 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.668694 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.668702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.668716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.668729 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.702343 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.702371 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.702371 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:16 crc kubenswrapper[4827]: E0126 09:07:16.702490 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:16 crc kubenswrapper[4827]: E0126 09:07:16.702573 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:16 crc kubenswrapper[4827]: E0126 09:07:16.702701 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.716503 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:35:21.280357462 +0000 UTC Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.771501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.771531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.771539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.771552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.771561 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.873601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.873652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.873664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.873684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.873697 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.976158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.976213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.976229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.976249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:16 crc kubenswrapper[4827]: I0126 09:07:16.976264 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:16Z","lastTransitionTime":"2026-01-26T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.078331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.078358 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.078367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.078381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.078390 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.181204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.181243 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.181256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.181268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.181277 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.283485 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.283524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.283532 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.283545 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.283553 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.385039 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.385078 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.385089 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.385103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.385114 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.487392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.487433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.487465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.487486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.487530 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.608853 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.608902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.608918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.608934 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.608945 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.702757 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:17 crc kubenswrapper[4827]: E0126 09:07:17.703009 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.711111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.711160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.711173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.711194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.711206 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.716053 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.717195 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:52:06.341929823 +0000 UTC Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.813812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.813856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.813869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.813886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.813898 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.916560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.916616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.916630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.916666 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:17 crc kubenswrapper[4827]: I0126 09:07:17.916679 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:17Z","lastTransitionTime":"2026-01-26T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.023603 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.023633 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.023646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.023699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.023711 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.125918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.125980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.125992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.126009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.126020 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.228541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.228588 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.228602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.228620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.228632 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.331325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.331366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.331377 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.331393 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.331403 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.433737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.433774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.433784 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.433799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.433808 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.535420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.535460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.535470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.535485 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.535495 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.638444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.638496 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.638510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.638529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.638542 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.702741 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.702792 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.702770 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:18 crc kubenswrapper[4827]: E0126 09:07:18.702898 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:18 crc kubenswrapper[4827]: E0126 09:07:18.703015 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:18 crc kubenswrapper[4827]: E0126 09:07:18.703105 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.717457 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:11:40.76190424 +0000 UTC Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.740601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.740670 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.740682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.740698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.740711 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.879560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.879592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.879601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.879620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.879632 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.982239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.982281 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.982327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.982343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:18 crc kubenswrapper[4827]: I0126 09:07:18.982352 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:18Z","lastTransitionTime":"2026-01-26T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.085048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.085092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.085104 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.085121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.085136 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.187698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.187745 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.187758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.187773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.187783 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.290510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.290550 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.290562 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.290580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.290592 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.393028 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.393064 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.393074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.393088 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.393101 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.495542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.495599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.495616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.495671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.495692 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.597978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.598004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.598013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.598023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.598032 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.700247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.700284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.700293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.700307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.700316 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.704883 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:19 crc kubenswrapper[4827]: E0126 09:07:19.705001 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.717680 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:28:02.213268472 +0000 UTC Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.802575 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.802697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.802716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.802770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.802788 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.905785 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.905819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.905828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.905843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:19 crc kubenswrapper[4827]: I0126 09:07:19.905855 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:19Z","lastTransitionTime":"2026-01-26T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.008174 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.008234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.008245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.008285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.008304 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.110628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.110703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.110715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.110750 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.110761 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.212942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.212971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.212980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.212993 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.213002 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.315180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.315209 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.315217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.315229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.315238 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.416894 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.416928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.416940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.416953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.416961 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.518904 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.518932 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.518942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.518956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.518967 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.621339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.621370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.621381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.621394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.621406 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.702191 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.702209 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.702191 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:20 crc kubenswrapper[4827]: E0126 09:07:20.702298 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:20 crc kubenswrapper[4827]: E0126 09:07:20.702391 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:20 crc kubenswrapper[4827]: E0126 09:07:20.702455 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.717771 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:41:51.174078677 +0000 UTC Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.723559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.723590 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.723599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.723611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.723621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.826081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.826125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.826136 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.826152 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.826163 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.928568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.928607 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.928622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.928659 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:20 crc kubenswrapper[4827]: I0126 09:07:20.928670 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:20Z","lastTransitionTime":"2026-01-26T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.030628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.030706 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.030721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.030761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.030776 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.133036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.133079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.133090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.133105 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.133116 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.235206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.235244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.235252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.235265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.235274 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.337672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.337714 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.337724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.337742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.337755 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.440507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.440566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.440577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.440591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.440615 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.543208 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.543250 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.543258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.543273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.543285 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.645711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.645783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.645804 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.645826 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.645843 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.702475 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:21 crc kubenswrapper[4827]: E0126 09:07:21.702624 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.719906 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:03:47.428512479 +0000 UTC Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.722154 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0f
b17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.734272 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.746619 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.747887 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc 
kubenswrapper[4827]: I0126 09:07:21.747921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.747929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.747944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.747954 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.761080 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.776298 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.795240 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.815762 4827 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krb
hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989c
fdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\
"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.832757 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.842993 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.849825 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.849851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.849861 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.849876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.849886 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.852143 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.864386 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.876018 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.889361 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.905939 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.919047 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.928819 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.937550 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.953883 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.953948 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.953962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.953978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.953990 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:21Z","lastTransitionTime":"2026-01-26T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:21 crc kubenswrapper[4827]: I0126 09:07:21.955815 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.055382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.055435 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.055446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.055459 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.055468 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.157718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.157754 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.157765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.157781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.157791 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.260876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.260921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.260930 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.260947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.260958 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.363512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.363554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.363568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.363585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.363593 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.465792 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.465830 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.465840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.465856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.465868 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.568722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.568788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.568798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.568813 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.568822 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.671486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.671521 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.671529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.671543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.671552 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.701997 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.702007 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:22 crc kubenswrapper[4827]: E0126 09:07:22.702101 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.702164 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:22 crc kubenswrapper[4827]: E0126 09:07:22.702241 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:22 crc kubenswrapper[4827]: E0126 09:07:22.702280 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.721057 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:32:29.566222889 +0000 UTC Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.774255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.774304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.774316 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.774336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.774349 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.876084 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.876144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.876153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.876168 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.876177 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.958670 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:22 crc kubenswrapper[4827]: E0126 09:07:22.958837 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:07:22 crc kubenswrapper[4827]: E0126 09:07:22.958930 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:07:54.958911196 +0000 UTC m=+103.607583015 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.978350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.978404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.978416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.978433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:22 crc kubenswrapper[4827]: I0126 09:07:22.978443 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:22Z","lastTransitionTime":"2026-01-26T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.081024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.081048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.081055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.081066 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.081081 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.183781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.183827 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.183838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.183855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.183867 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.286850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.286899 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.286908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.286927 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.286936 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.390599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.390703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.390727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.390755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.390775 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.494166 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.494204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.494215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.494230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.494240 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.596320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.596358 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.596366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.596380 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.596390 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.698702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.698756 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.698773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.698797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.698816 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.702848 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:23 crc kubenswrapper[4827]: E0126 09:07:23.702986 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.722075 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:18:14.549903798 +0000 UTC Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.801025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.801075 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.801096 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.801120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.801135 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.903112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.903140 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.903148 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.903160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:23 crc kubenswrapper[4827]: I0126 09:07:23.903168 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:23Z","lastTransitionTime":"2026-01-26T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.008892 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.008939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.008949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.008963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.008974 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.066520 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/0.log" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.066599 4827 generic.go:334] "Generic (PLEG): container finished" podID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" containerID="87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f" exitCode=1 Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.066762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerDied","Data":"87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.067369 4827 scope.go:117] "RemoveContainer" containerID="87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.081175 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.099321 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.111737 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.112342 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.112371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.112382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.112397 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.112407 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.131555 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.144408 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.156928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc 
kubenswrapper[4827]: I0126 09:07:24.172044 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af9
91f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 
09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.186400 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.201112 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.213931 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.213979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.213991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.214005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.214015 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.214952 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.224440 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c8
5a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.234830 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.245567 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.256800 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.266681 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.276060 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.288492 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.301107 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.317125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.317173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.317185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 
09:07:24.317201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.317212 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.419344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.419378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.419389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.419405 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.419416 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.522018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.522079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.522091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.522108 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.522120 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.624680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.624741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.624758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.624784 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.624800 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.702246 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.702278 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.702266 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:24 crc kubenswrapper[4827]: E0126 09:07:24.702376 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:24 crc kubenswrapper[4827]: E0126 09:07:24.702425 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:24 crc kubenswrapper[4827]: E0126 09:07:24.702522 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.722180 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:52:36.888836205 +0000 UTC Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.727008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.727040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.727052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.727069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.727080 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.829561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.829597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.829605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.829618 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.829627 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.931780 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.931817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.931828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.931845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:24 crc kubenswrapper[4827]: I0126 09:07:24.931855 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:24Z","lastTransitionTime":"2026-01-26T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.034509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.034547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.034560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.034577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.034589 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.071147 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/0.log" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.071218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerStarted","Data":"b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.088682 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.103545 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.113763 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.123710 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.136921 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.137285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.137312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.137323 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.137338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.137347 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.150104 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.167164 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.178006 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.190187 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.204219 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.214348 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.236428 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.240090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.240141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.240157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.240179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.240197 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.254346 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.267597 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.279486 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.288794 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.301272 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.312866 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.342600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.342656 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.342667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.342681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.342691 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.445420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.445456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.445466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.445481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.445491 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.547821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.547848 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.547858 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.547870 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.547879 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.652328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.652577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.652717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.652796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.652854 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.704472 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:25 crc kubenswrapper[4827]: E0126 09:07:25.704575 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.723018 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:41:44.690118569 +0000 UTC Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.755727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.755996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.756086 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.756192 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.756275 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.859173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.859231 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.859241 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.859255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.859266 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.961548 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.961581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.961591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.961605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:25 crc kubenswrapper[4827]: I0126 09:07:25.961617 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:25Z","lastTransitionTime":"2026-01-26T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.063370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.063408 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.063418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.063433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.063444 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.065822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.065848 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.065857 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.065870 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.065879 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.077926 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:26Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.082514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.082548 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.082560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.082576 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.082587 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.098717 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:26Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.102688 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.102731 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.102743 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.102759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.102771 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.116692 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:26Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.119949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.119994 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.120004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.120020 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.120032 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.132779 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:26Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.135927 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.135961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.135971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.135984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.135992 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.145595 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8bb801-e455-4976-8dea-8e9cfca6b87a\\\",\\\"systemUUID\\\":\\\"0c72dade-aced-4c2f-bbff-04b65bb274fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:26Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.145744 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.165828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.165859 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.165871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.165886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.165898 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.281210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.281832 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.282822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.282942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.283044 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.385063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.385097 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.385105 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.385120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.385130 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.487683 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.487737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.487751 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.487768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.487780 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.590815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.590868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.590880 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.590896 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.590908 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.692797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.692851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.692863 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.692881 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.692893 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.702075 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.702118 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.702139 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.702203 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.702294 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:26 crc kubenswrapper[4827]: E0126 09:07:26.702405 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.724477 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:22:48.525977822 +0000 UTC Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.795827 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.795854 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.795863 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.795876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.795884 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.898460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.898494 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.898502 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.898514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:26 crc kubenswrapper[4827]: I0126 09:07:26.898523 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:26Z","lastTransitionTime":"2026-01-26T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.000813 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.000863 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.000876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.000896 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.000908 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.103330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.103372 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.103382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.103402 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.103414 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.206184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.206229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.206239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.206253 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.206266 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.307915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.307956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.307967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.307983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.307995 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.410775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.410814 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.410823 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.410840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.410852 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.513044 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.513354 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.513466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.513582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.513720 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.615651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.615717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.615729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.615746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.615768 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.702628 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:27 crc kubenswrapper[4827]: E0126 09:07:27.702791 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.717783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.718020 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.718090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.718213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.718294 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.725323 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:22:59.928208187 +0000 UTC Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.821175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.821214 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.821227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.821242 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.821257 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.923690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.923753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.923764 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.923781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:27 crc kubenswrapper[4827]: I0126 09:07:27.923792 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:27Z","lastTransitionTime":"2026-01-26T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.026815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.026855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.026864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.026876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.026885 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.128877 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.129087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.129174 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.129296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.129378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.231365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.231630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.231726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.231797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.231865 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.334131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.334171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.334182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.334199 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.334209 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.436842 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.436883 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.436892 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.436908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.436917 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.539300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.539378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.539400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.539423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.539439 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.641978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.642016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.642025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.642038 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.642048 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.702385 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:28 crc kubenswrapper[4827]: E0126 09:07:28.702523 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.702540 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.702852 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:28 crc kubenswrapper[4827]: E0126 09:07:28.702956 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:28 crc kubenswrapper[4827]: E0126 09:07:28.703138 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.703345 4827 scope.go:117] "RemoveContainer" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.725572 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:35:28.307580515 +0000 UTC Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.744546 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.744588 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.744599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.744617 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.744628 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.847287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.847319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.847328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.847343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.847368 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.951172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.951218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.951226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.951240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:28 crc kubenswrapper[4827]: I0126 09:07:28.951249 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:28Z","lastTransitionTime":"2026-01-26T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.053807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.053846 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.053856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.053872 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.053882 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.083884 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/2.log" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.086344 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.086813 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.106929 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 
09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.118263 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.130946 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.142018 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.152092 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06
:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.155816 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.155851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.155860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.155881 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.155892 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.166345 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb32da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: 
I0126 09:07:29.179629 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.191936 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.205294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.215206 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.225507 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.240441 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.253777 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.257752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.257782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.257791 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc 
kubenswrapper[4827]: I0126 09:07:29.257822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.257837 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.276139 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.289917 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.302428 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.313411 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.322919 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.360753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.360806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.360815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.360829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.360841 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.462889 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.462952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.462963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.462978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.462991 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.565070 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.565112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.565124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.565141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.565154 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.667256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.667300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.667314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.667331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.667343 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.702971 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:29 crc kubenswrapper[4827]: E0126 09:07:29.703134 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.726736 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:59:07.418669121 +0000 UTC Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.769879 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.769952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.769976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.770004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.770028 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.872196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.872267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.872284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.872303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.872317 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.974886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.974935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.974943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.974960 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:29 crc kubenswrapper[4827]: I0126 09:07:29.974969 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:29Z","lastTransitionTime":"2026-01-26T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.077285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.077325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.077336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.077352 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.077364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.095086 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/3.log" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.095734 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/2.log" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.097786 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" exitCode=1 Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.097820 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.097849 4827 scope.go:117] "RemoveContainer" containerID="b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.098395 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:07:30 crc kubenswrapper[4827]: E0126 09:07:30.098520 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.113989 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.132427 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.145755 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.159948 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.175748 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.179827 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 
09:07:30.179898 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.179922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.179951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.179976 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.189127 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.199160 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.218356 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6547329ccdead5f00a5dca5c7d2697a6085963f71b363121ad2eb7f23b8de57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:00Z\\\",\\\"message\\\":\\\"_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 09:07:00.527818 6376 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 09:07:00.527864 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:29Z\\\",\\\"message\\\":\\\"tor *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:07:29.383851 6764 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.384040 6764 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.388858 6764 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 09:07:29.388934 6764 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 09:07:29.388985 6764 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 09:07:29.389029 6764 factory.go:656] Stopping watch factory\\\\nI0126 09:07:29.389062 6764 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 09:07:29.418319 6764 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 09:07:29.418358 6764 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 09:07:29.418416 6764 ovnkube.go:599] Stopped ovnkube\\\\nI0126 09:07:29.418451 6764 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 09:07:29.418552 6764 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.232395 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.251485 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.263398 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.276318 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.282987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.283024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.283035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.283051 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.283063 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.289902 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.302087 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.313561 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.327023 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.336938 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.351679 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.384964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.384994 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.385003 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.385014 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.385023 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.487324 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.487362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.487374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.487391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.487402 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.589510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.589542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.589551 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.589565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.589575 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.691711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.691752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.691765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.691781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.691792 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.702267 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.702296 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:30 crc kubenswrapper[4827]: E0126 09:07:30.702395 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.702530 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:30 crc kubenswrapper[4827]: E0126 09:07:30.702621 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:30 crc kubenswrapper[4827]: E0126 09:07:30.702762 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.727822 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:23:11.664408374 +0000 UTC Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.796066 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.796292 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.796589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.796680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.796763 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.899262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.899481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.899549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.899667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:30 crc kubenswrapper[4827]: I0126 09:07:30.899730 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:30Z","lastTransitionTime":"2026-01-26T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.002480 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.002515 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.002525 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.002538 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.002547 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.102538 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/3.log" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.104299 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.104360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.104444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.104468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.104480 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.106467 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:07:31 crc kubenswrapper[4827]: E0126 09:07:31.106624 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.120482 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-
operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.133020 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.145155 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.154224 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.166228 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.177013 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.189792 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.206718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.206748 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.206756 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.206768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.206778 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.208896 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.219340 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.230583 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.244857 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.257355 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.266907 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.283938 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:29Z\\\",\\\"message\\\":\\\"tor *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:07:29.383851 6764 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.384040 6764 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.388858 6764 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 09:07:29.388934 6764 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 09:07:29.388985 6764 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 09:07:29.389029 6764 factory.go:656] Stopping watch factory\\\\nI0126 09:07:29.389062 6764 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 09:07:29.418319 6764 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 09:07:29.418358 6764 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 09:07:29.418416 6764 ovnkube.go:599] Stopped ovnkube\\\\nI0126 09:07:29.418451 6764 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 09:07:29.418552 6764 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:07:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.297092 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309408 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309430 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.309416 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.322494 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.332282 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.411415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.411445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.411454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.411467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.411476 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.513362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.513407 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.513418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.513433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.513444 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.615959 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.616183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.616244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.616302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.616402 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.702374 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:31 crc kubenswrapper[4827]: E0126 09:07:31.702515 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.714054 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec123c02-3b1b-48d2-b6aa-9d7b4831878f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f154de2dc6bd8a782fd1ae73427517f12ca1f1c99faae0023d24817c90b3c04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://658827b9702d08f9687a85b6c23917b026e39acf37837cf47aafcfd63c6d4263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2404bddc6b8e567335638698c8407257ba576ab67e7490b5f66bd92d2e7fae6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.718422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.718458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.718474 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.718495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.718510 4827 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.728474 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.728666 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:56:11.066776033 +0000 UTC Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.738036 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6274e4b38a404612cdf9bdfb8394ff0221101cd59b98a9aeafe9ed3a75e1c718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.747418 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qmzjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b871a59f-4896-4609-806e-7255dd7708b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d176c8052a05afa17c1f226a6efef87113e4328694766becf8fd12a048f0a75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6n4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qmzjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.762802 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ba16376-c20a-411b-b45a-d7e718fbbac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:29Z\\\",\\\"message\\\":\\\"tor *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 09:07:29.383851 6764 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.384040 6764 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 09:07:29.388858 6764 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 09:07:29.388934 6764 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 09:07:29.388985 6764 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 09:07:29.389029 6764 factory.go:656] Stopping watch factory\\\\nI0126 09:07:29.389062 6764 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 09:07:29.418319 6764 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 09:07:29.418358 6764 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 09:07:29.418416 6764 ovnkube.go:599] Stopped ovnkube\\\\nI0126 09:07:29.418451 6764 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 09:07:29.418552 6764 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:07:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a899d565676b840a5
63f72ad1303586dd5e90bc13854d9421fa43b5f5558726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gss4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9xkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.777962 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T09:06:30Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0126 09:06:30.316694 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 09:06:30.316841 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 09:06:30.318030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069362019/tls.crt::/tmp/serving-cert-1069362019/tls.key\\\\\\\"\\\\nI0126 09:06:30.682511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 09:06:30.684833 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 09:06:30.684856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 09:06:30.684965 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 09:06:30.684980 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 09:06:30.693898 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 09:06:30.693927 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693935 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 09:06:30.693940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 09:06:30.693945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 09:06:30.693949 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 09:06:30.693953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 09:06:30.694199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 09:06:30.696595 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632053b9b462c710a88af57f0cfafc682
5c9ce18451a2591e69712fe509fb474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.789101 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.801307 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v7qpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e83a7bed-4909-4830-89e5-13c9a0bfcaf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T09:07:23Z\\\",\\\"message\\\":\\\"2026-01-26T09:06:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b\\\\n2026-01-26T09:06:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a4c5221-2528-4ecd-ac97-efc006e37f6b to /host/opt/cni/bin/\\\\n2026-01-26T09:06:38Z [verbose] multus-daemon started\\\\n2026-01-26T09:06:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T09:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v7qpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.811255 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k927z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng82w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k927z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.820979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.821108 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.821214 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.821316 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.821396 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.823224 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a0ad004c2885dad7b3583a68d2a1dd6850ff56d5cd20bfa13329e61eb3efa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.833710 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7f460a0239e1814e7c6960270e2917fe2c7605bee39ee40bab619c372ab43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e401a71020ba4b0afc1ee342de28267fdd0fa5a758845f46f80e4c5bb2c7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.846464 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e37ec5-8c72-432d-9809-ac670c707671\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe5e2a448e038d5b5d54671e929cd7e04ba4bac293f1c7ac593bf85692a0434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bf97954ba8c0f61a5fd8e83ac8d9a4b191ecdd6c84bfceff19d83de0088c43f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fa9ad7294868ac9563f5cbd6c4f6a7b2c2c8f188add6a79e9a95e9db401825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518d78e33d5a54599b6ae8467b118da16672a8fd92f6623366beca1da94e6f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d2
557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e96d2557098968345d3c0a31c4f5d47b4ca03ad1dfc02a165d21a78f86ef32f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afb6ac62821235d2cd2ffe593dedf7b9dbe83f0989cfdb60cbbd5711410304d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed10453031717fcc9abc8a0b357c1dfa021f2a2c89bba29c5b638a0be873b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krbhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cbqrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.856568 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qn5kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4d1d479-6214-447e-95c4-b563700234d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://650445a4b41f5bbf6a420b918daadca37f2d956f684dd77b4eb438fb2b99129b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T09:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg59w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qn5kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.866751 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f9ee397-1413-403b-9884-232263b4ebe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c01daa7c176d6f01f483b5dfc72b2cb6a33473bc93925b7435d0401c4b07414c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402beed65fd7017ea2796184cff6af38c7cb3
2da02de87284cfb0306bd80225a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tjb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8srzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.875812 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e4595d3-977a-466e-a0cb-e85c6503cea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c35922071157881fb61c809652ca638d0701f1237239bb5098e3dadd541bb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb49c84fb9e6db63189a29d1e657b96445f35a7f905567b60f750964a974706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.886984 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb5c7fe-4b8c-446b-905d-73fd6b288057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57c9e6f4191c4730fa1857ea42e845e1e1c4d7e1c1f278c1781481fdefd0fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4130cd61737cf99aa4a85deefbee4cd8629b8d180f22476f6f3ac29e616b817f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86849347fe78755e084ea65e6367fb5fca9bce5053edd1bd1aa8b8b6114e1f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9801fc3f4dea31edcaf07e08a67dda0f857398fafe4a18b8ae802b651e6e4cb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:06:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.897825 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.909761 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef39dc20-499c-4665-9555-481361ffe06d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T09:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3de679615049bbde28d1440221718155b6110d486332761d247f8ca74a721ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5
a5d0651ca76ba0fdc2f20094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T09:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rzv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T09:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.923536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.923600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.923625 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:31 crc 
kubenswrapper[4827]: I0126 09:07:31.923707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:31 crc kubenswrapper[4827]: I0126 09:07:31.923732 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:31Z","lastTransitionTime":"2026-01-26T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.026979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.027037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.027055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.027081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.027129 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.129053 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.129285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.129418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.129560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.129673 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.234073 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.234139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.234159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.234187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.234228 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.336898 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.336970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.336993 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.337022 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.337044 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.439664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.439715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.439729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.439752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.439767 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.542928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.543212 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.543305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.543403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.543494 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.645985 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.646041 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.646057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.646078 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.646093 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.702284 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.702457 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.702884 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:32 crc kubenswrapper[4827]: E0126 09:07:32.703148 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:32 crc kubenswrapper[4827]: E0126 09:07:32.703315 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:32 crc kubenswrapper[4827]: E0126 09:07:32.703465 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.722586 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.729354 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:15:59.932474055 +0000 UTC Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.748277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.748337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.748349 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.748366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.748378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.851181 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.851217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.851226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.851253 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.851262 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.953801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.953835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.953846 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.953862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:32 crc kubenswrapper[4827]: I0126 09:07:32.953873 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:32Z","lastTransitionTime":"2026-01-26T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.056401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.056430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.056437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.056451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.056481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.158694 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.158737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.158748 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.158765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.158778 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.260971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.261008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.261019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.261043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.261069 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.362939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.362966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.362977 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.362989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.362997 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.465709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.465757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.465773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.465807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.465840 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.572923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.572968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.572978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.572998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.573010 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.674854 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.674889 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.674900 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.674915 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.674925 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.702283 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:33 crc kubenswrapper[4827]: E0126 09:07:33.702736 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.730511 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:52:28.824783997 +0000 UTC Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.776887 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.776958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.776969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.776982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.776990 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.879347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.879394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.879404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.879419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.879465 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.982623 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.982685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.982701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.982721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:33 crc kubenswrapper[4827]: I0126 09:07:33.982735 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:33Z","lastTransitionTime":"2026-01-26T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.085204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.085255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.085268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.085286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.085298 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.188738 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.188780 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.188794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.188817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.188833 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.291613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.291714 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.291735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.291755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.291765 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.394003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.394042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.394050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.394065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.394075 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.467906 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.468095 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.468154 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468189 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468231 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.468200359 +0000 UTC m=+147.116872178 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.468279 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468300 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.468266441 +0000 UTC m=+147.116938300 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.468348 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468361 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468410 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468433 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468477 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468504 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.468479616 +0000 UTC m=+147.117151485 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468539 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.468521767 +0000 UTC m=+147.117193626 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468552 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468564 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468575 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.468602 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.468593049 +0000 UTC m=+147.117264988 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.495933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.495968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.495980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.495998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.496009 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.599395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.599438 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.599461 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.599483 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.599497 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.701722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.701756 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.701766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.701781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.701792 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.702260 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.702312 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.702360 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.702424 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.702602 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:34 crc kubenswrapper[4827]: E0126 09:07:34.702756 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.731688 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:22:44.312364731 +0000 UTC Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.804886 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.804926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.804937 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.804954 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.804996 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.907549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.907591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.907600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.907615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:34 crc kubenswrapper[4827]: I0126 09:07:34.907624 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:34Z","lastTransitionTime":"2026-01-26T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.010210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.010284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.010296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.010311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.010323 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.112745 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.112797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.112805 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.112821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.112831 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.215594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.215630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.215653 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.215668 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.215679 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.317593 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.317663 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.317675 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.317692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.317726 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.420222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.420278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.420289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.420304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.420315 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.522929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.522963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.522972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.522984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.522995 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.625473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.625809 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.625828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.625887 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.625905 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.702926 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:35 crc kubenswrapper[4827]: E0126 09:07:35.703063 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.728188 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.728233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.728246 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.728262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.728273 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.732657 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:22:48.106767913 +0000 UTC Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.832084 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.832141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.832160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.832187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.832209 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.934941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.935026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.935036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.935053 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:35 crc kubenswrapper[4827]: I0126 09:07:35.935063 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:35Z","lastTransitionTime":"2026-01-26T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.037846 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.037884 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.037895 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.037911 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.037921 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.140124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.140182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.140199 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.140221 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.140238 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.242659 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.242955 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.243050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.243123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.243183 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.345952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.346008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.346026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.346050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.346067 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.347416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.347447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.347455 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.347469 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.347479 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.702240 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.702305 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.702262 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:36 crc kubenswrapper[4827]: E0126 09:07:36.702438 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:36 crc kubenswrapper[4827]: E0126 09:07:36.702363 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:36 crc kubenswrapper[4827]: E0126 09:07:36.702523 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.733720 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:39:49.257878078 +0000 UTC Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.765871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.765903 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.765913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.765928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.765938 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T09:07:36Z","lastTransitionTime":"2026-01-26T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.794762 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h"] Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.795559 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.797746 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.797974 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.798701 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.801275 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.833478 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cbqrj" podStartSLOduration=60.833456876 podStartE2EDuration="1m0.833456876s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.833371994 +0000 UTC m=+85.482043813" watchObservedRunningTime="2026-01-26 09:07:36.833456876 +0000 UTC m=+85.482128695" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.859455 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qn5kf" podStartSLOduration=60.859431897 podStartE2EDuration="1m0.859431897s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.84732657 +0000 UTC m=+85.495998389" watchObservedRunningTime="2026-01-26 09:07:36.859431897 +0000 UTC m=+85.508103716" 
Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.886455 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.886435955 podStartE2EDuration="4.886435955s" podCreationTimestamp="2026-01-26 09:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.886357603 +0000 UTC m=+85.535029422" watchObservedRunningTime="2026-01-26 09:07:36.886435955 +0000 UTC m=+85.535107774" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.886912 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8srzr" podStartSLOduration=59.886907518 podStartE2EDuration="59.886907518s" podCreationTimestamp="2026-01-26 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.859605352 +0000 UTC m=+85.508277171" watchObservedRunningTime="2026-01-26 09:07:36.886907518 +0000 UTC m=+85.535579337" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.895377 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af860f12-2015-4dc3-9511-ab0bf3698c65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.895416 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.895439 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af860f12-2015-4dc3-9511-ab0bf3698c65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.895466 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af860f12-2015-4dc3-9511-ab0bf3698c65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.895482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.937049 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podStartSLOduration=60.937031583 podStartE2EDuration="1m0.937031583s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.924954606 +0000 UTC m=+85.573626425" watchObservedRunningTime="2026-01-26 09:07:36.937031583 +0000 UTC 
m=+85.585703402" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.953392 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.953373861 podStartE2EDuration="19.953373861s" podCreationTimestamp="2026-01-26 09:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.937534526 +0000 UTC m=+85.586206345" watchObservedRunningTime="2026-01-26 09:07:36.953373861 +0000 UTC m=+85.602045680" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.965674 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.965662394 podStartE2EDuration="35.965662394s" podCreationTimestamp="2026-01-26 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.953754882 +0000 UTC m=+85.602426701" watchObservedRunningTime="2026-01-26 09:07:36.965662394 +0000 UTC m=+85.614334213" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.978173 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qmzjr" podStartSLOduration=60.978154862 podStartE2EDuration="1m0.978154862s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:36.977603257 +0000 UTC m=+85.626275076" watchObservedRunningTime="2026-01-26 09:07:36.978154862 +0000 UTC m=+85.626826681" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.995993 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996031 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af860f12-2015-4dc3-9511-ab0bf3698c65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996053 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af860f12-2015-4dc3-9511-ab0bf3698c65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af860f12-2015-4dc3-9511-ab0bf3698c65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996105 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc 
kubenswrapper[4827]: I0126 09:07:36.996153 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996165 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af860f12-2015-4dc3-9511-ab0bf3698c65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:36 crc kubenswrapper[4827]: I0126 09:07:36.996941 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af860f12-2015-4dc3-9511-ab0bf3698c65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.008110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af860f12-2015-4dc3-9511-ab0bf3698c65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: \"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.012691 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af860f12-2015-4dc3-9511-ab0bf3698c65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dzc4h\" (UID: 
\"af860f12-2015-4dc3-9511-ab0bf3698c65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.057383 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=64.057365591 podStartE2EDuration="1m4.057365591s" podCreationTimestamp="2026-01-26 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:37.057016551 +0000 UTC m=+85.705688370" watchObservedRunningTime="2026-01-26 09:07:37.057365591 +0000 UTC m=+85.706037410" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.107031 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.108220 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v7qpk" podStartSLOduration=61.108206314 podStartE2EDuration="1m1.108206314s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:37.095606983 +0000 UTC m=+85.744278802" watchObservedRunningTime="2026-01-26 09:07:37.108206314 +0000 UTC m=+85.756878133" Jan 26 09:07:37 crc kubenswrapper[4827]: W0126 09:07:37.119130 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf860f12_2015_4dc3_9511_ab0bf3698c65.slice/crio-cfa85757f738ae3b9cf71a71f983da12953875e051a35a29a295cf734c1b798c WatchSource:0}: Error finding container cfa85757f738ae3b9cf71a71f983da12953875e051a35a29a295cf734c1b798c: Status 404 returned error can't find the container with id 
cfa85757f738ae3b9cf71a71f983da12953875e051a35a29a295cf734c1b798c Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.139108 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" event={"ID":"af860f12-2015-4dc3-9511-ab0bf3698c65","Type":"ContainerStarted","Data":"cfa85757f738ae3b9cf71a71f983da12953875e051a35a29a295cf734c1b798c"} Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.149531 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.149515218 podStartE2EDuration="1m7.149515218s" podCreationTimestamp="2026-01-26 09:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:37.135187502 +0000 UTC m=+85.783859321" watchObservedRunningTime="2026-01-26 09:07:37.149515218 +0000 UTC m=+85.798187037" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.702225 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:37 crc kubenswrapper[4827]: E0126 09:07:37.702355 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.734167 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:03:53.163038496 +0000 UTC Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.734219 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 09:07:37 crc kubenswrapper[4827]: I0126 09:07:37.741022 4827 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 09:07:38 crc kubenswrapper[4827]: I0126 09:07:38.143104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" event={"ID":"af860f12-2015-4dc3-9511-ab0bf3698c65","Type":"ContainerStarted","Data":"4df49b4991e4f4797097076d2573929c90afc68aa7c2c931bb1884f383f00976"} Jan 26 09:07:38 crc kubenswrapper[4827]: I0126 09:07:38.162256 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dzc4h" podStartSLOduration=62.162237128 podStartE2EDuration="1m2.162237128s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:07:38.162004852 +0000 UTC m=+86.810676671" watchObservedRunningTime="2026-01-26 09:07:38.162237128 +0000 UTC m=+86.810908947" Jan 26 09:07:38 crc kubenswrapper[4827]: I0126 09:07:38.701869 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:38 crc kubenswrapper[4827]: I0126 09:07:38.701901 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:38 crc kubenswrapper[4827]: I0126 09:07:38.701901 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:38 crc kubenswrapper[4827]: E0126 09:07:38.702397 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:07:38 crc kubenswrapper[4827]: E0126 09:07:38.702267 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:38 crc kubenswrapper[4827]: E0126 09:07:38.702626 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:07:39 crc kubenswrapper[4827]: I0126 09:07:39.702515 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:07:39 crc kubenswrapper[4827]: E0126 09:07:39.702980 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:07:40 crc kubenswrapper[4827]: I0126 09:07:40.702403 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:07:40 crc kubenswrapper[4827]: I0126 09:07:40.702431 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:07:40 crc kubenswrapper[4827]: E0126 09:07:40.702512 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:07:40 crc kubenswrapper[4827]: I0126 09:07:40.702403 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:07:40 crc kubenswrapper[4827]: E0126 09:07:40.702709 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:40 crc kubenswrapper[4827]: E0126 09:07:40.702738 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:41 crc kubenswrapper[4827]: I0126 09:07:41.702144 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:41 crc kubenswrapper[4827]: E0126 09:07:41.704387 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:42 crc kubenswrapper[4827]: I0126 09:07:42.702235 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:42 crc kubenswrapper[4827]: I0126 09:07:42.702237 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:42 crc kubenswrapper[4827]: E0126 09:07:42.702969 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:42 crc kubenswrapper[4827]: E0126 09:07:42.702889 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:42 crc kubenswrapper[4827]: I0126 09:07:42.702257 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:42 crc kubenswrapper[4827]: E0126 09:07:42.703064 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:43 crc kubenswrapper[4827]: I0126 09:07:43.702922 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:43 crc kubenswrapper[4827]: E0126 09:07:43.703155 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:44 crc kubenswrapper[4827]: I0126 09:07:44.702683 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:44 crc kubenswrapper[4827]: I0126 09:07:44.702732 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:44 crc kubenswrapper[4827]: E0126 09:07:44.702907 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:44 crc kubenswrapper[4827]: E0126 09:07:44.703037 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:44 crc kubenswrapper[4827]: I0126 09:07:44.702730 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:44 crc kubenswrapper[4827]: E0126 09:07:44.704262 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:45 crc kubenswrapper[4827]: I0126 09:07:45.702537 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:45 crc kubenswrapper[4827]: E0126 09:07:45.702798 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:45 crc kubenswrapper[4827]: I0126 09:07:45.703472 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"
Jan 26 09:07:45 crc kubenswrapper[4827]: E0126 09:07:45.703692 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0"
Jan 26 09:07:46 crc kubenswrapper[4827]: I0126 09:07:46.702300 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:46 crc kubenswrapper[4827]: E0126 09:07:46.702453 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:46 crc kubenswrapper[4827]: I0126 09:07:46.702578 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:46 crc kubenswrapper[4827]: E0126 09:07:46.702689 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:46 crc kubenswrapper[4827]: I0126 09:07:46.702729 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:46 crc kubenswrapper[4827]: E0126 09:07:46.702800 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:47 crc kubenswrapper[4827]: I0126 09:07:47.702443 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:47 crc kubenswrapper[4827]: E0126 09:07:47.702572 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:48 crc kubenswrapper[4827]: I0126 09:07:48.702856 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:48 crc kubenswrapper[4827]: I0126 09:07:48.702908 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:48 crc kubenswrapper[4827]: I0126 09:07:48.702872 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:48 crc kubenswrapper[4827]: E0126 09:07:48.702999 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:48 crc kubenswrapper[4827]: E0126 09:07:48.703134 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:48 crc kubenswrapper[4827]: E0126 09:07:48.703212 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:49 crc kubenswrapper[4827]: I0126 09:07:49.702233 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:49 crc kubenswrapper[4827]: E0126 09:07:49.702368 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:50 crc kubenswrapper[4827]: I0126 09:07:50.702447 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:50 crc kubenswrapper[4827]: I0126 09:07:50.702489 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:50 crc kubenswrapper[4827]: I0126 09:07:50.702572 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:50 crc kubenswrapper[4827]: E0126 09:07:50.702822 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:50 crc kubenswrapper[4827]: E0126 09:07:50.703132 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:50 crc kubenswrapper[4827]: E0126 09:07:50.702960 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:51 crc kubenswrapper[4827]: I0126 09:07:51.704028 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:51 crc kubenswrapper[4827]: E0126 09:07:51.704283 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:52 crc kubenswrapper[4827]: I0126 09:07:52.701921 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:52 crc kubenswrapper[4827]: I0126 09:07:52.702027 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:52 crc kubenswrapper[4827]: I0126 09:07:52.702031 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:52 crc kubenswrapper[4827]: E0126 09:07:52.702115 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:52 crc kubenswrapper[4827]: E0126 09:07:52.702229 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:52 crc kubenswrapper[4827]: E0126 09:07:52.702359 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:53 crc kubenswrapper[4827]: I0126 09:07:53.702550 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:53 crc kubenswrapper[4827]: E0126 09:07:53.702739 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:54 crc kubenswrapper[4827]: I0126 09:07:54.702685 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:54 crc kubenswrapper[4827]: I0126 09:07:54.702878 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:54 crc kubenswrapper[4827]: E0126 09:07:54.703061 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:54 crc kubenswrapper[4827]: E0126 09:07:54.703202 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:54 crc kubenswrapper[4827]: I0126 09:07:54.703292 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:54 crc kubenswrapper[4827]: E0126 09:07:54.703445 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:54 crc kubenswrapper[4827]: I0126 09:07:54.982868 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:54 crc kubenswrapper[4827]: E0126 09:07:54.983024 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 09:07:54 crc kubenswrapper[4827]: E0126 09:07:54.983266 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs podName:a9bc714d-5eac-4b0e-8832-f65f57bffa1e nodeName:}" failed. No retries permitted until 2026-01-26 09:08:58.98324859 +0000 UTC m=+167.631920409 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs") pod "network-metrics-daemon-k927z" (UID: "a9bc714d-5eac-4b0e-8832-f65f57bffa1e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 09:07:55 crc kubenswrapper[4827]: I0126 09:07:55.702033 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:55 crc kubenswrapper[4827]: E0126 09:07:55.702449 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:56 crc kubenswrapper[4827]: I0126 09:07:56.702136 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:56 crc kubenswrapper[4827]: E0126 09:07:56.702840 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:56 crc kubenswrapper[4827]: I0126 09:07:56.702167 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:56 crc kubenswrapper[4827]: I0126 09:07:56.702152 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:56 crc kubenswrapper[4827]: E0126 09:07:56.703136 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:56 crc kubenswrapper[4827]: E0126 09:07:56.703408 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:57 crc kubenswrapper[4827]: I0126 09:07:57.702332 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:57 crc kubenswrapper[4827]: E0126 09:07:57.703044 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:58 crc kubenswrapper[4827]: I0126 09:07:58.702355 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:07:58 crc kubenswrapper[4827]: I0126 09:07:58.702407 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:07:58 crc kubenswrapper[4827]: E0126 09:07:58.702465 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:07:58 crc kubenswrapper[4827]: I0126 09:07:58.702407 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:07:58 crc kubenswrapper[4827]: E0126 09:07:58.702594 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:07:58 crc kubenswrapper[4827]: E0126 09:07:58.702523 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:07:59 crc kubenswrapper[4827]: I0126 09:07:59.702799 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:07:59 crc kubenswrapper[4827]: E0126 09:07:59.703023 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:07:59 crc kubenswrapper[4827]: I0126 09:07:59.704191 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"
Jan 26 09:07:59 crc kubenswrapper[4827]: E0126 09:07:59.704501 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9xkm_openshift-ovn-kubernetes(3ba16376-c20a-411b-b45a-d7e718fbbac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0"
Jan 26 09:08:00 crc kubenswrapper[4827]: I0126 09:08:00.702419 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:00 crc kubenswrapper[4827]: I0126 09:08:00.702516 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:00 crc kubenswrapper[4827]: I0126 09:08:00.702420 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:00 crc kubenswrapper[4827]: E0126 09:08:00.702630 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:00 crc kubenswrapper[4827]: E0126 09:08:00.702879 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:00 crc kubenswrapper[4827]: E0126 09:08:00.702965 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:01 crc kubenswrapper[4827]: I0126 09:08:01.701981 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:01 crc kubenswrapper[4827]: E0126 09:08:01.704223 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:02 crc kubenswrapper[4827]: I0126 09:08:02.701984 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:02 crc kubenswrapper[4827]: I0126 09:08:02.701977 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:02 crc kubenswrapper[4827]: I0126 09:08:02.702005 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:02 crc kubenswrapper[4827]: E0126 09:08:02.702661 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:02 crc kubenswrapper[4827]: E0126 09:08:02.702623 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:02 crc kubenswrapper[4827]: E0126 09:08:02.702425 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:03 crc kubenswrapper[4827]: I0126 09:08:03.705175 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:03 crc kubenswrapper[4827]: E0126 09:08:03.705361 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:04 crc kubenswrapper[4827]: I0126 09:08:04.702599 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:04 crc kubenswrapper[4827]: I0126 09:08:04.702690 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:04 crc kubenswrapper[4827]: I0126 09:08:04.702618 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:04 crc kubenswrapper[4827]: E0126 09:08:04.702880 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:04 crc kubenswrapper[4827]: E0126 09:08:04.703023 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:04 crc kubenswrapper[4827]: E0126 09:08:04.703126 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:05 crc kubenswrapper[4827]: I0126 09:08:05.703061 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:05 crc kubenswrapper[4827]: E0126 09:08:05.703289 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:06 crc kubenswrapper[4827]: I0126 09:08:06.702681 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:06 crc kubenswrapper[4827]: I0126 09:08:06.702674 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:06 crc kubenswrapper[4827]: I0126 09:08:06.702880 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:06 crc kubenswrapper[4827]: E0126 09:08:06.702996 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:06 crc kubenswrapper[4827]: E0126 09:08:06.703138 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:06 crc kubenswrapper[4827]: E0126 09:08:06.703252 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:07 crc kubenswrapper[4827]: I0126 09:08:07.702371 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:07 crc kubenswrapper[4827]: E0126 09:08:07.702521 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:08 crc kubenswrapper[4827]: I0126 09:08:08.701950 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:08 crc kubenswrapper[4827]: I0126 09:08:08.701940 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:08 crc kubenswrapper[4827]: E0126 09:08:08.702471 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:08 crc kubenswrapper[4827]: I0126 09:08:08.701976 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:08 crc kubenswrapper[4827]: E0126 09:08:08.702772 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:08 crc kubenswrapper[4827]: E0126 09:08:08.702511 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:09 crc kubenswrapper[4827]: I0126 09:08:09.702199 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:09 crc kubenswrapper[4827]: E0126 09:08:09.702371 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.247153 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/1.log" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.247890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/0.log" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.247958 4827 generic.go:334] "Generic (PLEG): container finished" podID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" containerID="b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a" exitCode=1 Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.247995 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerDied","Data":"b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a"} Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.248039 4827 scope.go:117] "RemoveContainer" containerID="87ca65fdc34c559bd29ff68794c53fea7dcf2cbbc16dc6d8ea56b3b627cef99f" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.248714 4827 scope.go:117] "RemoveContainer" containerID="b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a" Jan 26 09:08:10 crc kubenswrapper[4827]: E0126 09:08:10.249024 4827 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-v7qpk_openshift-multus(e83a7bed-4909-4830-89e5-13c9a0bfcaf6)\"" pod="openshift-multus/multus-v7qpk" podUID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.702833 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.702970 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:10 crc kubenswrapper[4827]: E0126 09:08:10.703055 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.703138 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:10 crc kubenswrapper[4827]: E0126 09:08:10.703248 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:10 crc kubenswrapper[4827]: E0126 09:08:10.703620 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:10 crc kubenswrapper[4827]: I0126 09:08:10.704049 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.252428 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/1.log" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.255042 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/3.log" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.257902 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerStarted","Data":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.258471 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.490818 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podStartSLOduration=95.490798502 podStartE2EDuration="1m35.490798502s" podCreationTimestamp="2026-01-26 09:06:36 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:11.294243665 +0000 UTC m=+119.942915504" watchObservedRunningTime="2026-01-26 09:08:11.490798502 +0000 UTC m=+120.139470321" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.491519 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k927z"] Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.491621 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:11 crc kubenswrapper[4827]: E0126 09:08:11.491779 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:11 crc kubenswrapper[4827]: E0126 09:08:11.616948 4827 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 09:08:11 crc kubenswrapper[4827]: I0126 09:08:11.702522 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:11 crc kubenswrapper[4827]: E0126 09:08:11.703087 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:12 crc kubenswrapper[4827]: E0126 09:08:12.064344 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 09:08:12 crc kubenswrapper[4827]: I0126 09:08:12.701819 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:12 crc kubenswrapper[4827]: I0126 09:08:12.701819 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:12 crc kubenswrapper[4827]: E0126 09:08:12.701960 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:12 crc kubenswrapper[4827]: E0126 09:08:12.702016 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:13 crc kubenswrapper[4827]: I0126 09:08:13.703987 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:13 crc kubenswrapper[4827]: E0126 09:08:13.704240 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:13 crc kubenswrapper[4827]: I0126 09:08:13.705722 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:13 crc kubenswrapper[4827]: E0126 09:08:13.705844 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:14 crc kubenswrapper[4827]: I0126 09:08:14.702050 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:14 crc kubenswrapper[4827]: E0126 09:08:14.702455 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:14 crc kubenswrapper[4827]: I0126 09:08:14.702087 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:14 crc kubenswrapper[4827]: E0126 09:08:14.702840 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:15 crc kubenswrapper[4827]: I0126 09:08:15.702072 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:15 crc kubenswrapper[4827]: E0126 09:08:15.702235 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:15 crc kubenswrapper[4827]: I0126 09:08:15.702400 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:15 crc kubenswrapper[4827]: E0126 09:08:15.702442 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:16 crc kubenswrapper[4827]: I0126 09:08:16.337412 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:08:16 crc kubenswrapper[4827]: I0126 09:08:16.702518 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:16 crc kubenswrapper[4827]: E0126 09:08:16.702667 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:16 crc kubenswrapper[4827]: I0126 09:08:16.702711 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:16 crc kubenswrapper[4827]: E0126 09:08:16.702767 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:17 crc kubenswrapper[4827]: E0126 09:08:17.066055 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 26 09:08:17 crc kubenswrapper[4827]: I0126 09:08:17.702042 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:17 crc kubenswrapper[4827]: I0126 09:08:17.702065 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:17 crc kubenswrapper[4827]: E0126 09:08:17.702206 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:17 crc kubenswrapper[4827]: E0126 09:08:17.702323 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:18 crc kubenswrapper[4827]: I0126 09:08:18.702458 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:18 crc kubenswrapper[4827]: I0126 09:08:18.702511 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:18 crc kubenswrapper[4827]: E0126 09:08:18.702671 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:18 crc kubenswrapper[4827]: E0126 09:08:18.702847 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:19 crc kubenswrapper[4827]: I0126 09:08:19.702771 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:19 crc kubenswrapper[4827]: I0126 09:08:19.702804 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:19 crc kubenswrapper[4827]: E0126 09:08:19.702930 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:19 crc kubenswrapper[4827]: E0126 09:08:19.703028 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:20 crc kubenswrapper[4827]: I0126 09:08:20.702533 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:20 crc kubenswrapper[4827]: I0126 09:08:20.702596 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:20 crc kubenswrapper[4827]: E0126 09:08:20.702699 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 09:08:20 crc kubenswrapper[4827]: E0126 09:08:20.702812 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:21 crc kubenswrapper[4827]: I0126 09:08:21.704227 4827 scope.go:117] "RemoveContainer" containerID="b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a" Jan 26 09:08:21 crc kubenswrapper[4827]: I0126 09:08:21.704331 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:21 crc kubenswrapper[4827]: I0126 09:08:21.704522 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:21 crc kubenswrapper[4827]: E0126 09:08:21.704706 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e" Jan 26 09:08:21 crc kubenswrapper[4827]: E0126 09:08:21.704832 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 09:08:22 crc kubenswrapper[4827]: E0126 09:08:22.066975 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 09:08:22 crc kubenswrapper[4827]: I0126 09:08:22.296131 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/1.log" Jan 26 09:08:22 crc kubenswrapper[4827]: I0126 09:08:22.296996 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerStarted","Data":"1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d"} Jan 26 09:08:22 crc kubenswrapper[4827]: I0126 09:08:22.702103 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:22 crc kubenswrapper[4827]: E0126 09:08:22.702258 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 09:08:22 crc kubenswrapper[4827]: I0126 09:08:22.702678 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:22 crc kubenswrapper[4827]: E0126 09:08:22.703017 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:23 crc kubenswrapper[4827]: I0126 09:08:23.702487 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:23 crc kubenswrapper[4827]: I0126 09:08:23.702511 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:23 crc kubenswrapper[4827]: E0126 09:08:23.702820 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:23 crc kubenswrapper[4827]: E0126 09:08:23.702658 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:24 crc kubenswrapper[4827]: I0126 09:08:24.702490 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:24 crc kubenswrapper[4827]: I0126 09:08:24.702530 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:24 crc kubenswrapper[4827]: E0126 09:08:24.702752 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:24 crc kubenswrapper[4827]: E0126 09:08:24.702904 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:25 crc kubenswrapper[4827]: I0126 09:08:25.702450 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:25 crc kubenswrapper[4827]: E0126 09:08:25.702580 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 09:08:25 crc kubenswrapper[4827]: I0126 09:08:25.702606 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:25 crc kubenswrapper[4827]: E0126 09:08:25.702717 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k927z" podUID="a9bc714d-5eac-4b0e-8832-f65f57bffa1e"
Jan 26 09:08:26 crc kubenswrapper[4827]: I0126 09:08:26.702723 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:26 crc kubenswrapper[4827]: I0126 09:08:26.702871 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:26 crc kubenswrapper[4827]: E0126 09:08:26.702925 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 09:08:26 crc kubenswrapper[4827]: E0126 09:08:26.703178 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.702325 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.702449 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.707349 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.707437 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.707726 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 09:08:27 crc kubenswrapper[4827]: I0126 09:08:27.707743 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.473314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.523381 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.523930 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.528622 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.529148 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.533504 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rtv5j"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.534240 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.535072 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.536143 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.537131 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.539924 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.540331 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.541135 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.541283 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgv9x"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.541887 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.541935 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.542103 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-slntw"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.542261 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.542498 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-slntw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.546324 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.546461 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.546542 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.553483 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.554063 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.554330 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.554417 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.573207 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.580159 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.580795 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.581037 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.581116 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.581140 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.581375 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.581838 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582124 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582265 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582460 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582530 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582557 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582588 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.582837 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583082 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583498 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583102 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583607 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583627 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583119 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583748 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583127 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583954 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583321 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.583338 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.584340 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.584523 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.584842 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.585198 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.585302 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.585630 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.586127 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.586247 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.586347 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.586724 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.586735 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.587283 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.588104 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwz57"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.593502 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.601750 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.605341 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.605653 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.605909 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2vwz5"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.606172 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2vwz5"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.606815 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.606995 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.607204 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.607253 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.609788 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.610297 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.610579 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.610684 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.610690 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.610996 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.612748 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.612862 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.615353 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.615658 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.615693 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.616522 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.616563 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.618668 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.618726 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.619112 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.619816 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.619902 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.620079 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.620199 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.620693 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.621356 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.629660 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.630240 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.631182 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.633700 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.634228 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.634665 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.635147 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.640947 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.641553 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.641782 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.642030 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.642217 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.642375 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.643231 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.643376 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.643697 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.643971 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mvwnc"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.644434 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mvwnc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.645551 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.645952 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.647440 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.648247 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.649555 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5724v"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.650038 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.651262 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.651748 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.652011 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.652746 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.654172 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.654961 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.660544 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.660910 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.661030 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.678068 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.678230 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.678313 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.678510 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.678685 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.680046 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.680738 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.681286 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.681972 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.682153 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.681277 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ztlnq"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.683593 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.683707 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.684023 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.684268 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.685396 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.685570 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.685615 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.685817 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.686149 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.686265 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.686350 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.686477 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.690309 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db426"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.690670 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.693017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.696932 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.697105 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.697202 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.697380 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.697498 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.697763 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.698246 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.698325 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.699362 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.702794 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.703283 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.719130 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.719874 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.720006 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"]
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.720421 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.720539 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2vwz5"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.720713 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.721616 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-trusted-ca\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722544 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722579 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722671 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-serving-cert\") 
pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722765 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgkk\" (UniqueName: \"kubernetes.io/projected/53459cff-b8c1-495b-8d5e-49d54a77fb30-kube-api-access-6vgkk\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722780 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-images\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.722958 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723034 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-serving-cert\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723164 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-config\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723192 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-profile-collector-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723269 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723317 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723339 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xghs\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-kube-api-access-6xghs\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723370 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9d7\" (UniqueName: \"kubernetes.io/projected/500979f1-7a4a-4d40-8391-6df8d92f803a-kube-api-access-tn9d7\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723396 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2c8d789-5fe6-4f51-b6b7-7a986933867d-serving-cert\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500979f1-7a4a-4d40-8391-6df8d92f803a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc 
kubenswrapper[4827]: I0126 09:08:28.723441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/463a0b0e-04a4-4bc1-b865-46613288436b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723460 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-serving-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723478 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723500 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723523 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723543 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw5j5\" (UniqueName: \"kubernetes.io/projected/2c409bff-4b8d-4296-91a4-5436aadab19b-kube-api-access-jw5j5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723561 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723593 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-machine-approver-tls\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723613 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9cl\" (UniqueName: \"kubernetes.io/projected/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-kube-api-access-9w9cl\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.729713 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.729746 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.729821 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.723630 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-srv-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735655 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-client\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735681 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/463a0b0e-04a4-4bc1-b865-46613288436b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc 
kubenswrapper[4827]: I0126 09:08:28.735706 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhbz\" (UniqueName: \"kubernetes.io/projected/463a0b0e-04a4-4bc1-b865-46613288436b-kube-api-access-rwhbz\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735731 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1001135b-5055-4366-a41e-84019fd4666b-metrics-tls\") pod \"dns-operator-744455d44c-xwz57\" (UID: \"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735751 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-image-import-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-config\") pod 
\"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735828 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c409bff-4b8d-4296-91a4-5436aadab19b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735850 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-policies\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735878 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-encryption-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735895 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzdg\" (UniqueName: \"kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 
09:08:28.735910 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-metrics-tls\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735927 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-auth-proxy-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735954 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2l2\" (UniqueName: \"kubernetes.io/projected/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-kube-api-access-9c2l2\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735970 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.735987 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7n8\" (UniqueName: \"kubernetes.io/projected/a2c8d789-5fe6-4f51-b6b7-7a986933867d-kube-api-access-wx7n8\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736020 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-client\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736040 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fk6g\" (UniqueName: \"kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736054 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-node-pullsecrets\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 
09:08:28.736082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit-dir\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736100 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-dir\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736116 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736133 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl8tm\" (UniqueName: \"kubernetes.io/projected/d4f90fc1-5287-4e23-9f4a-4e194db3610b-kube-api-access-bl8tm\") pod \"downloads-7954f5f757-2vwz5\" (UID: \"d4f90fc1-5287-4e23-9f4a-4e194db3610b\") " pod="openshift-console/downloads-7954f5f757-2vwz5" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736161 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl89k\" (UniqueName: \"kubernetes.io/projected/1001135b-5055-4366-a41e-84019fd4666b-kube-api-access-sl89k\") pod \"dns-operator-744455d44c-xwz57\" (UID: 
\"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736193 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736213 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-encryption-config\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736240 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c409bff-4b8d-4296-91a4-5436aadab19b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736259 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/00f5a10b-1353-4060-a2b0-7cc7d9980817-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736275 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-config\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500979f1-7a4a-4d40-8391-6df8d92f803a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736340 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sbc\" (UniqueName: \"kubernetes.io/projected/5e95674a-44b2-42a1-95fd-af905608305b-kube-api-access-h5sbc\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736382 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736400 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736431 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736459 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.736484 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrx4\" (UniqueName: \"kubernetes.io/projected/00f5a10b-1353-4060-a2b0-7cc7d9980817-kube-api-access-zwrx4\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.738752 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.738815 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwz57"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.741843 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.742491 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.743527 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.744070 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.745540 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.745686 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.746850 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-slntw"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.748723 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.749290 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.750889 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rtv5j"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.758839 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.759534 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.760487 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.771719 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.771373 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-skgbv"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.772551 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgv9x"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.772621 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.777277 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.777454 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pwkmz"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.778182 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pwkmz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.780189 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.787134 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.787753 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.793466 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.797819 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mvwnc"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.800145 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8bdhj"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.801305 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.801470 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.802056 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.805100 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6rb7x"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.805686 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.806931 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.807146 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ztlnq"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.809408 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.810830 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.812199 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.813241 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.816576 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 
09:08:28.818432 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.820358 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8bdhj"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.822901 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.825130 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.826141 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.827762 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.828928 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db426"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.830323 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.831303 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.832202 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.833367 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-skgbv"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.835022 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837173 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837418 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837480 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-metrics-certs\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837505 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec85065e-6410-43f8-9b49-bc0d1956b92d-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837531 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9d7\" (UniqueName: \"kubernetes.io/projected/500979f1-7a4a-4d40-8391-6df8d92f803a-kube-api-access-tn9d7\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xghs\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-kube-api-access-6xghs\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837573 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5ht\" (UniqueName: \"kubernetes.io/projected/ec85065e-6410-43f8-9b49-bc0d1956b92d-kube-api-access-4x5ht\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837596 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2c8d789-5fe6-4f51-b6b7-7a986933867d-serving-cert\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837619 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500979f1-7a4a-4d40-8391-6df8d92f803a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837687 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837714 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-serving-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837750 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837771 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/463a0b0e-04a4-4bc1-b865-46613288436b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837793 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837813 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-images\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837836 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw5j5\" (UniqueName: \"kubernetes.io/projected/2c409bff-4b8d-4296-91a4-5436aadab19b-kube-api-access-jw5j5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837856 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837969 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-srv-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838016 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-machine-approver-tls\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w9cl\" (UniqueName: \"kubernetes.io/projected/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-kube-api-access-9w9cl\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-client\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838079 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/463a0b0e-04a4-4bc1-b865-46613288436b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838102 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwts\" (UniqueName: \"kubernetes.io/projected/c082a1b4-a8cb-4bd5-9034-1678368030c0-kube-api-access-4jwts\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838124 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8af31d9-704f-494e-be0e-df5743e8c0c0-proxy-tls\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwhbz\" (UniqueName: \"kubernetes.io/projected/463a0b0e-04a4-4bc1-b865-46613288436b-kube-api-access-rwhbz\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 
09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1001135b-5055-4366-a41e-84019fd4666b-metrics-tls\") pod \"dns-operator-744455d44c-xwz57\" (UID: \"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838259 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-default-certificate\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838282 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-image-import-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838323 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-config\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838366 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c409bff-4b8d-4296-91a4-5436aadab19b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-policies\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-encryption-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvzdg\" (UniqueName: \"kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838450 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-metrics-tls\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838471 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-stats-auth\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838512 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-auth-proxy-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 
09:08:28.838564 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c082a1b4-a8cb-4bd5-9034-1678368030c0-service-ca-bundle\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838589 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c2l2\" (UniqueName: \"kubernetes.io/projected/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-kube-api-access-9c2l2\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838610 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838677 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838705 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx7n8\" (UniqueName: \"kubernetes.io/projected/a2c8d789-5fe6-4f51-b6b7-7a986933867d-kube-api-access-wx7n8\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: 
\"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838727 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-client\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838751 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fk6g\" (UniqueName: \"kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838774 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-node-pullsecrets\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838792 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit-dir\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838814 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838834 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838857 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-dir\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838878 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838900 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl8tm\" (UniqueName: \"kubernetes.io/projected/d4f90fc1-5287-4e23-9f4a-4e194db3610b-kube-api-access-bl8tm\") pod \"downloads-7954f5f757-2vwz5\" (UID: \"d4f90fc1-5287-4e23-9f4a-4e194db3610b\") " pod="openshift-console/downloads-7954f5f757-2vwz5" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838921 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sl89k\" (UniqueName: \"kubernetes.io/projected/1001135b-5055-4366-a41e-84019fd4666b-kube-api-access-sl89k\") pod \"dns-operator-744455d44c-xwz57\" (UID: \"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838942 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838964 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-encryption-config\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.838996 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c409bff-4b8d-4296-91a4-5436aadab19b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839018 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/00f5a10b-1353-4060-a2b0-7cc7d9980817-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" 
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839040 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-config\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839060 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500979f1-7a4a-4d40-8391-6df8d92f803a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839083 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb303a5e-d8a9-45be-8984-534092b4c2b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839105 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839128 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5sbc\" (UniqueName: 
\"kubernetes.io/projected/5e95674a-44b2-42a1-95fd-af905608305b-kube-api-access-h5sbc\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839149 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839177 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-trusted-ca-bundle\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839199 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhr7\" (UniqueName: \"kubernetes.io/projected/eb303a5e-d8a9-45be-8984-534092b4c2b7-kube-api-access-4bhr7\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839230 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839251 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxxw\" (UniqueName: \"kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839274 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839294 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839318 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrx4\" (UniqueName: \"kubernetes.io/projected/00f5a10b-1353-4060-a2b0-7cc7d9980817-kube-api-access-zwrx4\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839339 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrq9f\" (UniqueName: \"kubernetes.io/projected/b8af31d9-704f-494e-be0e-df5743e8c0c0-kube-api-access-qrq9f\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839359 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec85065e-6410-43f8-9b49-bc0d1956b92d-proxy-tls\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839379 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-trusted-ca\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839421 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839484 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-serving-cert\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839508 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839529 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgkk\" (UniqueName: \"kubernetes.io/projected/53459cff-b8c1-495b-8d5e-49d54a77fb30-kube-api-access-6vgkk\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839550 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-images\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839580 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-serving-cert\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839600 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-config\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-profile-collector-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.839956 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-q57l9"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.840716 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pwkmz"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.840899 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q57l9" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.841536 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-node-pullsecrets\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.841601 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit-dir\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.841666 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-dir\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.841776 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.842590 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.842753 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6rb7x"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.842909 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-image-import-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.843729 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-auth-proxy-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.843751 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.837886 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.844526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.845017 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-serving-ca\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.845177 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/463a0b0e-04a4-4bc1-b865-46613288436b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.845882 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-config\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.846055 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c409bff-4b8d-4296-91a4-5436aadab19b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.846189 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-config\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.846437 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.846790 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-srv-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.847284 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.847537 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-audit-policies\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.848289 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-serving-cert\") 
pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.848572 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/463a0b0e-04a4-4bc1-b865-46613288436b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.848794 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-etcd-client\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.848960 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-machine-approver-tls\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.849010 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.849598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.849662 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.850107 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"] Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.850064 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.850443 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c8d789-5fe6-4f51-b6b7-7a986933867d-config\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.850720 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.851082 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500979f1-7a4a-4d40-8391-6df8d92f803a-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.851470 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5e95674a-44b2-42a1-95fd-af905608305b-profile-collector-cert\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.851521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.851611 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-audit\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.851782 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-config\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.852525 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/00f5a10b-1353-4060-a2b0-7cc7d9980817-images\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.852591 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53459cff-b8c1-495b-8d5e-49d54a77fb30-trusted-ca-bundle\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.852812 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-etcd-client\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853235 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2c8d789-5fe6-4f51-b6b7-7a986933867d-serving-cert\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853335 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c409bff-4b8d-4296-91a4-5436aadab19b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853369 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500979f1-7a4a-4d40-8391-6df8d92f803a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853695 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-encryption-config\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853753 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/00f5a10b-1353-4060-a2b0-7cc7d9980817-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853843 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.853922 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-encryption-config\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.854208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-serving-cert\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.855374 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.855834 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53459cff-b8c1-495b-8d5e-49d54a77fb30-serving-cert\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.863275 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1001135b-5055-4366-a41e-84019fd4666b-metrics-tls\") pod \"dns-operator-744455d44c-xwz57\" (UID: \"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.866404 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.886782 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.908510 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.926858 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940031 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940062 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-images\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jwts\" (UniqueName: \"kubernetes.io/projected/c082a1b4-a8cb-4bd5-9034-1678368030c0-kube-api-access-4jwts\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940123 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8af31d9-704f-494e-be0e-df5743e8c0c0-proxy-tls\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940145 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-default-certificate\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940181 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-stats-auth\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940195 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940209 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940241 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c082a1b4-a8cb-4bd5-9034-1678368030c0-service-ca-bundle\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940292 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940314 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940342 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb303a5e-d8a9-45be-8984-534092b4c2b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940365 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940385 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhr7\" (UniqueName: \"kubernetes.io/projected/eb303a5e-d8a9-45be-8984-534092b4c2b7-kube-api-access-4bhr7\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940415 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxxw\" (UniqueName: \"kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940431 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec85065e-6410-43f8-9b49-bc0d1956b92d-proxy-tls\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrq9f\" (UniqueName: \"kubernetes.io/projected/b8af31d9-704f-494e-be0e-df5743e8c0c0-kube-api-access-qrq9f\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940492 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-metrics-certs\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940511 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec85065e-6410-43f8-9b49-bc0d1956b92d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940537 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5ht\" (UniqueName: \"kubernetes.io/projected/ec85065e-6410-43f8-9b49-bc0d1956b92d-kube-api-access-4x5ht\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.940908 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.941228 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.941411 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.941916 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.942221 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec85065e-6410-43f8-9b49-bc0d1956b92d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.943084 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.943932 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.944474 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb303a5e-d8a9-45be-8984-534092b4c2b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.945746 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.945779 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.977861 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 09:08:28 crc kubenswrapper[4827]: I0126 09:08:28.988994 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.007186 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.025784 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.046002 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.066125 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.085956 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.113256 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.126433 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.146469 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.153344 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-metrics-tls\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.166659 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.191316 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.200422 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-trusted-ca\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.206850 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.225379 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.246680 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.266163 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.275916 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-default-certificate\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.286887 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.293993 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-stats-auth\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.306830 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.314992 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c082a1b4-a8cb-4bd5-9034-1678368030c0-metrics-certs\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.326007 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.346214 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.351976 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c082a1b4-a8cb-4bd5-9034-1678368030c0-service-ca-bundle\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.366679 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.392688 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.406958 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.426888 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.446848 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.466399 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.486148 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.493526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8af31d9-704f-494e-be0e-df5743e8c0c0-proxy-tls\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.506223 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.511708 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8af31d9-704f-494e-be0e-df5743e8c0c0-images\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.526170 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.534404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec85065e-6410-43f8-9b49-bc0d1956b92d-proxy-tls\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.546992 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.566555 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.586079 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.606801 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.626754 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.647086 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.666188 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.685918 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.704875 4827 request.go:700] Waited for 1.011625964s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.706862 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.726354 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.746901 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.766226 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.786296 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.806687 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.826289 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.850081 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.874575 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.886320 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.946692 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.965972 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 09:08:29 crc kubenswrapper[4827]: I0126 09:08:29.986855 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.006410 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.026868 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.047075 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.066672 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.086859 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.106156 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.126598 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.146380 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.166817 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.186327 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.206509 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.225881 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.245621 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.266448 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.285957 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.306818 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.326070 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.346382 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.366234 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.386171 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.407127 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.426338 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.446757 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.466501 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.486168 4827 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.506712 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.526705 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.546653 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.566253 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.587175 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.606845 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.627136 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.676290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fk6g\" (UniqueName: \"kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g\") pod \"route-controller-manager-6576b87f9c-sbrrs\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.682298 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9d7\" (UniqueName: \"kubernetes.io/projected/500979f1-7a4a-4d40-8391-6df8d92f803a-kube-api-access-tn9d7\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tfwt\" (UID: \"500979f1-7a4a-4d40-8391-6df8d92f803a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.702682 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xghs\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-kube-api-access-6xghs\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.725052 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwhbz\" (UniqueName: \"kubernetes.io/projected/463a0b0e-04a4-4bc1-b865-46613288436b-kube-api-access-rwhbz\") pod \"openshift-apiserver-operator-796bbdcf4f-l2m7j\" (UID: \"463a0b0e-04a4-4bc1-b865-46613288436b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.725192 4827 request.go:700] Waited for 1.882465634s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.728899 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.744135 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl8tm\" (UniqueName: \"kubernetes.io/projected/d4f90fc1-5287-4e23-9f4a-4e194db3610b-kube-api-access-bl8tm\") pod \"downloads-7954f5f757-2vwz5\" (UID: \"d4f90fc1-5287-4e23-9f4a-4e194db3610b\") " pod="openshift-console/downloads-7954f5f757-2vwz5"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.772508 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl89k\" (UniqueName: \"kubernetes.io/projected/1001135b-5055-4366-a41e-84019fd4666b-kube-api-access-sl89k\") pod \"dns-operator-744455d44c-xwz57\" (UID: \"1001135b-5055-4366-a41e-84019fd4666b\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwz57"
Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.787607 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c2l2\" (UniqueName:
\"kubernetes.io/projected/8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01-kube-api-access-9c2l2\") pod \"apiserver-7bbb656c7d-cgbmh\" (UID: \"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.793805 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.802313 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw5j5\" (UniqueName: \"kubernetes.io/projected/2c409bff-4b8d-4296-91a4-5436aadab19b-kube-api-access-jw5j5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm24n\" (UID: \"2c409bff-4b8d-4296-91a4-5436aadab19b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.828618 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1282b8d2-fb85-4aa6-adf8-658f0fa77dee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5g848\" (UID: \"1282b8d2-fb85-4aa6-adf8-658f0fa77dee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.830470 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2vwz5" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.842876 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.848983 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9cl\" (UniqueName: \"kubernetes.io/projected/7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2-kube-api-access-9w9cl\") pod \"machine-approver-56656f9798-6vpjj\" (UID: \"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.867912 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx7n8\" (UniqueName: \"kubernetes.io/projected/a2c8d789-5fe6-4f51-b6b7-7a986933867d-kube-api-access-wx7n8\") pod \"authentication-operator-69f744f599-bgv9x\" (UID: \"a2c8d789-5fe6-4f51-b6b7-7a986933867d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.899118 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvzdg\" (UniqueName: \"kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg\") pod \"controller-manager-879f6c89f-jdttz\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.903956 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.912239 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.916598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55756cc-0888-4a99-bfdf-6f4a7eafa65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6z7p\" (UID: \"b55756cc-0888-4a99-bfdf-6f4a7eafa65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.935388 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrx4\" (UniqueName: \"kubernetes.io/projected/00f5a10b-1353-4060-a2b0-7cc7d9980817-kube-api-access-zwrx4\") pod \"machine-api-operator-5694c8668f-rtv5j\" (UID: \"00f5a10b-1353-4060-a2b0-7cc7d9980817\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.940102 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.949001 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgkk\" (UniqueName: \"kubernetes.io/projected/53459cff-b8c1-495b-8d5e-49d54a77fb30-kube-api-access-6vgkk\") pod \"apiserver-76f77b778f-slntw\" (UID: \"53459cff-b8c1-495b-8d5e-49d54a77fb30\") " pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.966261 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.982820 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5sbc\" (UniqueName: \"kubernetes.io/projected/5e95674a-44b2-42a1-95fd-af905608305b-kube-api-access-h5sbc\") pod \"catalog-operator-68c6474976-lzr6j\" (UID: \"5e95674a-44b2-42a1-95fd-af905608305b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.986275 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jwts\" (UniqueName: \"kubernetes.io/projected/c082a1b4-a8cb-4bd5-9034-1678368030c0-kube-api-access-4jwts\") pod \"router-default-5444994796-5724v\" (UID: \"c082a1b4-a8cb-4bd5-9034-1678368030c0\") " pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:30 crc kubenswrapper[4827]: I0126 09:08:30.988072 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.004686 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5ht\" (UniqueName: \"kubernetes.io/projected/ec85065e-6410-43f8-9b49-bc0d1956b92d-kube-api-access-4x5ht\") pod \"machine-config-controller-84d6567774-j2kjw\" (UID: \"ec85065e-6410-43f8-9b49-bc0d1956b92d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.005382 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"] Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.020204 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.026250 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxxw\" (UniqueName: \"kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw\") pod \"console-f9d7485db-cnfxn\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") " pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.027970 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.030346 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.041174 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrq9f\" (UniqueName: \"kubernetes.io/projected/b8af31d9-704f-494e-be0e-df5743e8c0c0-kube-api-access-qrq9f\") pod \"machine-config-operator-74547568cd-5crkc\" (UID: \"b8af31d9-704f-494e-be0e-df5743e8c0c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.042031 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.044539 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.061903 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.066321 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.069966 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhr7\" (UniqueName: \"kubernetes.io/projected/eb303a5e-d8a9-45be-8984-534092b4c2b7-kube-api-access-4bhr7\") pod \"cluster-samples-operator-665b6dd947-dw5zn\" (UID: \"eb303a5e-d8a9-45be-8984-534092b4c2b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165745 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165772 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165791 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165818 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stl6s\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165851 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165866 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: 
I0126 09:08:31.165889 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f390725c-ffef-45f6-bbc2-0145b811f4d5-serving-cert\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165919 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165938 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-cabundle\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165965 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-key\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165981 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.165996 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cf93d4-596a-45aa-80e6-3cc69672a99f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166030 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6fm\" (UniqueName: \"kubernetes.io/projected/abf06eac-0589-4c69-9244-c9b5b35e0356-kube-api-access-gm6fm\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166062 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166081 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: 
I0126 09:08:31.166114 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166146 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-config\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166190 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82cf93d4-596a-45aa-80e6-3cc69672a99f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166211 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166230 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1918f42b-ecd2-4800-85ab-fbc705acccd7-config\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166254 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1918f42b-ecd2-4800-85ab-fbc705acccd7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166314 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsd8c\" (UniqueName: \"kubernetes.io/projected/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-kube-api-access-lsd8c\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166328 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-webhook-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: 
\"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166352 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1918f42b-ecd2-4800-85ab-fbc705acccd7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.166394 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:31.666382689 +0000 UTC m=+140.315054508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166453 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abf06eac-0589-4c69-9244-c9b5b35e0356-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166471 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vsbvp\" (UniqueName: \"kubernetes.io/projected/f390725c-ffef-45f6-bbc2-0145b811f4d5-kube-api-access-vsbvp\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166497 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166511 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ssf\" (UniqueName: \"kubernetes.io/projected/a369544e-862d-41b6-928a-f1295ad7e93c-kube-api-access-86ssf\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166524 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-srv-cert\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166539 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166552 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166568 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166582 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a369544e-862d-41b6-928a-f1295ad7e93c-tmpfs\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166628 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166669 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166683 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-trusted-ca\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166730 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96pv\" (UniqueName: \"kubernetes.io/projected/d496a86c-e689-4511-8e17-bf8a246668e5-kube-api-access-k96pv\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166749 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cf93d4-596a-45aa-80e6-3cc69672a99f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166786 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166800 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166847 
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zlf\" (UniqueName: \"kubernetes.io/projected/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-kube-api-access-z2zlf\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.166862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.178345 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.203845 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.220560 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.259419 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwz57"] Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270279 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270426 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270459 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270476 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 
09:08:31.270490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a369544e-862d-41b6-928a-f1295ad7e93c-tmpfs\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270506 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270527 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f71e3ecb-16f2-4a94-b392-36d84d69b692-cert\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270590 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfd9\" (UniqueName: \"kubernetes.io/projected/ed7d25d6-390f-45d1-ab3e-af28799a9a70-kube-api-access-vsfd9\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " 
pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270678 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5l2\" (UniqueName: \"kubernetes.io/projected/716c8461-03fe-49c8-b3de-a254285cdd7d-kube-api-access-9g5l2\") pod \"migrator-59844c95c7-8xtx5\" (UID: \"716c8461-03fe-49c8-b3de-a254285cdd7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270723 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270741 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-trusted-ca\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270761 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-config\") pod 
\"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270786 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdd9s\" (UniqueName: \"kubernetes.io/projected/f71e3ecb-16f2-4a94-b392-36d84d69b692-kube-api-access-rdd9s\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270812 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270845 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96pv\" (UniqueName: \"kubernetes.io/projected/d496a86c-e689-4511-8e17-bf8a246668e5-kube-api-access-k96pv\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270868 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-plugins-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270894 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-service-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270925 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgqv\" (UniqueName: \"kubernetes.io/projected/215ad331-8016-474c-a940-47d0619b69cb-kube-api-access-wsgqv\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270948 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cf93d4-596a-45aa-80e6-3cc69672a99f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.270970 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-etcd-client\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271019 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: 
\"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271042 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271069 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed7d25d6-390f-45d1-ab3e-af28799a9a70-metrics-tls\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271109 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2zlf\" (UniqueName: \"kubernetes.io/projected/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-kube-api-access-z2zlf\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271125 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271227 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271242 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271269 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: 
\"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271294 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrnq\" (UniqueName: \"kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271311 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271328 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-socket-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271348 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stl6s\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271377 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271397 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-csi-data-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271432 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271448 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-node-bootstrap-token\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9" 
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271472 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f390725c-ffef-45f6-bbc2-0145b811f4d5-serving-cert\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271524 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271558 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-cabundle\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271574 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79daa322-67c6-43f1-920a-3aafd45b8b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271591 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-key\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271604 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-certs\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271658 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cf93d4-596a-45aa-80e6-3cc69672a99f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271672 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271686 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271718 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6fm\" (UniqueName: \"kubernetes.io/projected/abf06eac-0589-4c69-9244-c9b5b35e0356-kube-api-access-gm6fm\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271741 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271755 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271769 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-mountpoint-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271787 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271818 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4mx\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-kube-api-access-zf4mx\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271861 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-config\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271880 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3e851e60-0051-41ea-8a88-b16366bac737-config\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271916 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-serving-cert\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271940 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvn7z\" (UniqueName: \"kubernetes.io/projected/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-kube-api-access-mvn7z\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271962 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82cf93d4-596a-45aa-80e6-3cc69672a99f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271979 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.271994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1918f42b-ecd2-4800-85ab-fbc705acccd7-config\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1918f42b-ecd2-4800-85ab-fbc705acccd7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272028 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e851e60-0051-41ea-8a88-b16366bac737-serving-cert\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272058 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7d25d6-390f-45d1-ab3e-af28799a9a70-config-volume\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-registration-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272134 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbj97\" (UniqueName: \"kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272161 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272202 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsd8c\" (UniqueName: \"kubernetes.io/projected/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-kube-api-access-lsd8c\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272217 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-webhook-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 
09:08:31.272235 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b82wv\" (UniqueName: \"kubernetes.io/projected/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-kube-api-access-b82wv\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272257 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1918f42b-ecd2-4800-85ab-fbc705acccd7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272271 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqd5h\" (UniqueName: \"kubernetes.io/projected/7993ae76-3a13-4a6f-963b-8c9d854b48ec-kube-api-access-zqd5h\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272307 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcrh\" (UniqueName: \"kubernetes.io/projected/3e851e60-0051-41ea-8a88-b16366bac737-kube-api-access-9bcrh\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272334 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/79daa322-67c6-43f1-920a-3aafd45b8b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272351 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abf06eac-0589-4c69-9244-c9b5b35e0356-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272368 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbvp\" (UniqueName: \"kubernetes.io/projected/f390725c-ffef-45f6-bbc2-0145b811f4d5-kube-api-access-vsbvp\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272396 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272411 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ssf\" (UniqueName: \"kubernetes.io/projected/a369544e-862d-41b6-928a-f1295ad7e93c-kube-api-access-86ssf\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc 
kubenswrapper[4827]: I0126 09:08:31.272426 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7cc\" (UniqueName: \"kubernetes.io/projected/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-kube-api-access-tw7cc\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-srv-cert\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.272462 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.272576 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:31.772562387 +0000 UTC m=+140.421234206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.286029 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.287053 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.287735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1918f42b-ecd2-4800-85ab-fbc705acccd7-config\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.287737 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-cabundle\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.290796 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.292302 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a369544e-862d-41b6-928a-f1295ad7e93c-tmpfs\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.294353 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.295399 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.295764 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.297333 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-signing-key\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.300658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.303377 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-config\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.304384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.310304 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.312328 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.315338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cf93d4-596a-45aa-80e6-3cc69672a99f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.316472 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.320526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cf93d4-596a-45aa-80e6-3cc69672a99f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.320968 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt"] Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.321813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.322149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-webhook-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.327567 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a369544e-862d-41b6-928a-f1295ad7e93c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.327565 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.330182 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f390725c-ffef-45f6-bbc2-0145b811f4d5-trusted-ca\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.332160 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.339811 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.342779 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f390725c-ffef-45f6-bbc2-0145b811f4d5-serving-cert\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.345526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.347036 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.347272 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n"] Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.347568 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.348602 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1918f42b-ecd2-4800-85ab-fbc705acccd7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.349018 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abf06eac-0589-4c69-9244-c9b5b35e0356-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.351202 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.351598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.353582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5724v" event={"ID":"c082a1b4-a8cb-4bd5-9034-1678368030c0","Type":"ContainerStarted","Data":"8e7f505c043d6e43f78dacad2e1b6c34a58f8adc0ef211d323802639443a84fe"} Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.353607 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d496a86c-e689-4511-8e17-bf8a246668e5-srv-cert\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.353733 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96pv\" (UniqueName: \"kubernetes.io/projected/d496a86c-e689-4511-8e17-bf8a246668e5-kube-api-access-k96pv\") pod \"olm-operator-6b444d44fb-vts6f\" (UID: \"d496a86c-e689-4511-8e17-bf8a246668e5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.357403 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.357470 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" event={"ID":"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2","Type":"ContainerStarted","Data":"4bf5c9929242383754f2ac74809a2a4564fdad408104dcbfb59c1ba31dac16f9"} Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.357658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1918f42b-ecd2-4800-85ab-fbc705acccd7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-52lj8\" (UID: \"1918f42b-ecd2-4800-85ab-fbc705acccd7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.373984 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-config\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdd9s\" (UniqueName: \"kubernetes.io/projected/f71e3ecb-16f2-4a94-b392-36d84d69b692-kube-api-access-rdd9s\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374028 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374677 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-plugins-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374732 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-service-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374751 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgqv\" (UniqueName: \"kubernetes.io/projected/215ad331-8016-474c-a940-47d0619b69cb-kube-api-access-wsgqv\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374766 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-etcd-client\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed7d25d6-390f-45d1-ab3e-af28799a9a70-metrics-tls\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.374828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.375055 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-plugins-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.377294 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrnq\" (UniqueName: \"kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378297 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-socket-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378323 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-csi-data-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378342 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-node-bootstrap-token\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378359 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume\") pod 
\"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378398 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378415 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379846 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79daa322-67c6-43f1-920a-3aafd45b8b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379867 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-certs\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379885 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379902 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-mountpoint-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379962 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4mx\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-kube-api-access-zf4mx\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379997 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-serving-cert\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380013 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e851e60-0051-41ea-8a88-b16366bac737-config\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380032 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvn7z\" (UniqueName: \"kubernetes.io/projected/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-kube-api-access-mvn7z\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380092 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e851e60-0051-41ea-8a88-b16366bac737-serving-cert\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380109 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7d25d6-390f-45d1-ab3e-af28799a9a70-config-volume\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380130 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-registration-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380146 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbj97\" (UniqueName: \"kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380208 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b82wv\" (UniqueName: \"kubernetes.io/projected/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-kube-api-access-b82wv\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380233 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380248 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqd5h\" (UniqueName: \"kubernetes.io/projected/7993ae76-3a13-4a6f-963b-8c9d854b48ec-kube-api-access-zqd5h\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380293 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcrh\" (UniqueName: \"kubernetes.io/projected/3e851e60-0051-41ea-8a88-b16366bac737-kube-api-access-9bcrh\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380310 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/79daa322-67c6-43f1-920a-3aafd45b8b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380384 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw7cc\" (UniqueName: \"kubernetes.io/projected/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-kube-api-access-tw7cc\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380433 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f71e3ecb-16f2-4a94-b392-36d84d69b692-cert\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380448 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfd9\" (UniqueName: \"kubernetes.io/projected/ed7d25d6-390f-45d1-ab3e-af28799a9a70-kube-api-access-vsfd9\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.380477 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g5l2\" (UniqueName: \"kubernetes.io/projected/716c8461-03fe-49c8-b3de-a254285cdd7d-kube-api-access-9g5l2\") pod \"migrator-59844c95c7-8xtx5\" (UID: \"716c8461-03fe-49c8-b3de-a254285cdd7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"
Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.380885 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:31.880873571 +0000 UTC m=+140.529545390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.378743 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-service-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.381835 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.382701 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79daa322-67c6-43f1-920a-3aafd45b8b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.382774 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-mountpoint-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.382942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed7d25d6-390f-45d1-ab3e-af28799a9a70-metrics-tls\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379466 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-config\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379088 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-socket-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.388314 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e851e60-0051-41ea-8a88-b16366bac737-config\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.390081 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.390105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-certs\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.379524 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-csi-data-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.390364 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/215ad331-8016-474c-a940-47d0619b69cb-etcd-ca\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.390666 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.390735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-registration-dir\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.391068 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7d25d6-390f-45d1-ab3e-af28799a9a70-config-volume\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.393253 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e851e60-0051-41ea-8a88-b16366bac737-serving-cert\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.393343 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-serving-cert\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.393482 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.400227 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7993ae76-3a13-4a6f-963b-8c9d854b48ec-node-bootstrap-token\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.400567 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/215ad331-8016-474c-a940-47d0619b69cb-etcd-client\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.400564 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f71e3ecb-16f2-4a94-b392-36d84d69b692-cert\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.402328 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/79daa322-67c6-43f1-920a-3aafd45b8b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.403324 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.406375 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stl6s\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.408308 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2vwz5"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.411015 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6fm\" (UniqueName: \"kubernetes.io/projected/abf06eac-0589-4c69-9244-c9b5b35e0356-kube-api-access-gm6fm\") pod \"multus-admission-controller-857f4d67dd-db426\" (UID: \"abf06eac-0589-4c69-9244-c9b5b35e0356\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db426"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.432550 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82cf93d4-596a-45aa-80e6-3cc69672a99f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrgnr\" (UID: \"82cf93d4-596a-45aa-80e6-3cc69672a99f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.481060 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5g848"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.481303 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rtv5j"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.481979 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.482500 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbvp\" (UniqueName: \"kubernetes.io/projected/f390725c-ffef-45f6-bbc2-0145b811f4d5-kube-api-access-vsbvp\") pod \"console-operator-58897d9998-mvwnc\" (UID: \"f390725c-ffef-45f6-bbc2-0145b811f4d5\") " pod="openshift-console-operator/console-operator-58897d9998-mvwnc"
Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.483605 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:31.983583506 +0000 UTC m=+140.632255325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.510938 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2zlf\" (UniqueName: \"kubernetes.io/projected/8a1d3fee-212c-4628-b549-1c4d3e4cd0a2-kube-api-access-z2zlf\") pod \"openshift-config-operator-7777fb866f-st6nr\" (UID: \"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.522470 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgv9x"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.528129 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.530901 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.533298 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.543237 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.545296 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ssf\" (UniqueName: \"kubernetes.io/projected/a369544e-862d-41b6-928a-f1295ad7e93c-kube-api-access-86ssf\") pod \"packageserver-d55dfcdfc-cxjsh\" (UID: \"a369544e-862d-41b6-928a-f1295ad7e93c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.546753 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.563604 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdd9s\" (UniqueName: \"kubernetes.io/projected/f71e3ecb-16f2-4a94-b392-36d84d69b692-kube-api-access-rdd9s\") pod \"ingress-canary-pwkmz\" (UID: \"f71e3ecb-16f2-4a94-b392-36d84d69b692\") " pod="openshift-ingress-canary/ingress-canary-pwkmz"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.579531 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.583554 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.584006 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.08396636 +0000 UTC m=+140.732638179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.592315 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mvwnc"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.601284 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5l2\" (UniqueName: \"kubernetes.io/projected/716c8461-03fe-49c8-b3de-a254285cdd7d-kube-api-access-9g5l2\") pod \"migrator-59844c95c7-8xtx5\" (UID: \"716c8461-03fe-49c8-b3de-a254285cdd7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.607545 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.608174 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.608984 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5\") pod \"oauth-openshift-558db77b4-rkgr6\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.612489 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.618600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.621229 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsd8c\" (UniqueName: \"kubernetes.io/projected/b20ab25d-a227-4eb6-ad5a-ab9ff491b751-kube-api-access-lsd8c\") pod \"service-ca-9c57cc56f-ztlnq\" (UID: \"b20ab25d-a227-4eb6-ad5a-ab9ff491b751\") " pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.622838 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" event={"ID":"7a9fd91f-a5e7-491f-9e75-1766cefac723","Type":"ContainerStarted","Data":"5d8a8b8d0043ff4565021b3680da2d762dd88c5a96793444b4bae1618567e6fb"}
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.627531 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.630881 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrnq\" (UniqueName: \"kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq\") pod \"marketplace-operator-79b997595-dsztb\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.642622 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-slntw"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.652725 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.654041 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvn7z\" (UniqueName: \"kubernetes.io/projected/00acaa94-9dfe-4d0f-9ea2-17870a8c1af5-kube-api-access-mvn7z\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4rf7\" (UID: \"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.671383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4mx\" (UniqueName: \"kubernetes.io/projected/79daa322-67c6-43f1-920a-3aafd45b8b75-kube-api-access-zf4mx\") pod \"cluster-image-registry-operator-dc59b4c8b-5lqwh\" (UID: \"79daa322-67c6-43f1-920a-3aafd45b8b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.680542 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcrh\" (UniqueName: \"kubernetes.io/projected/3e851e60-0051-41ea-8a88-b16366bac737-kube-api-access-9bcrh\") pod \"service-ca-operator-777779d784-vbcq5\" (UID: \"3e851e60-0051-41ea-8a88-b16366bac737\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.683296 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.688626 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.688735 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.188713499 +0000 UTC m=+140.837385318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.688886 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.689121 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.18911065 +0000 UTC m=+140.837782469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.690722 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.708780 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbj97\" (UniqueName: \"kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97\") pod \"collect-profiles-29490300-vd2hb\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.711199 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.721166 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b82wv\" (UniqueName: \"kubernetes.io/projected/4026cd0f-c59b-4d79-be00-89bf2fd4ba84-kube-api-access-b82wv\") pod \"package-server-manager-789f6589d5-sdrc6\" (UID: \"4026cd0f-c59b-4d79-be00-89bf2fd4ba84\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"
Jan 26 09:08:31 crc kubenswrapper[4827]: W0126 09:08:31.725258 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb55756cc_0888_4a99_bfdf_6f4a7eafa65d.slice/crio-9e9af81d290b18d83262612a914f4ca3916d228b4dfc1907cfd751fd031e53ea WatchSource:0}: Error finding container 9e9af81d290b18d83262612a914f4ca3916d228b4dfc1907cfd751fd031e53ea: Status 404 returned error can't find the container with id 9e9af81d290b18d83262612a914f4ca3916d228b4dfc1907cfd751fd031e53ea
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.725376 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.726178 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.730117 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"]
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.731117 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.738226 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.741989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqd5h\" (UniqueName: \"kubernetes.io/projected/7993ae76-3a13-4a6f-963b-8c9d854b48ec-kube-api-access-zqd5h\") pod \"machine-config-server-q57l9\" (UID: \"7993ae76-3a13-4a6f-963b-8c9d854b48ec\") " pod="openshift-machine-config-operator/machine-config-server-q57l9"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.744120 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"
Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.752169 4827 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.761161 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfd9\" (UniqueName: \"kubernetes.io/projected/ed7d25d6-390f-45d1-ab3e-af28799a9a70-kube-api-access-vsfd9\") pod \"dns-default-6rb7x\" (UID: \"ed7d25d6-390f-45d1-ab3e-af28799a9a70\") " pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.767279 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pwkmz" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.785771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw7cc\" (UniqueName: \"kubernetes.io/projected/c41f5446-b61b-4e0c-bf6a-373f4df1b8ef-kube-api-access-tw7cc\") pod \"csi-hostpathplugin-8bdhj\" (UID: \"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef\") " pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.789998 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.790743 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.290724886 +0000 UTC m=+140.939396705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.794107 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.800123 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q57l9" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.803402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgqv\" (UniqueName: \"kubernetes.io/projected/215ad331-8016-474c-a940-47d0619b69cb-kube-api-access-wsgqv\") pod \"etcd-operator-b45778765-skgbv\" (UID: \"215ad331-8016-474c-a940-47d0619b69cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.893561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:31 crc kubenswrapper[4827]: E0126 09:08:31.893888 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:32.393876633 +0000 UTC m=+141.042548452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:31 crc kubenswrapper[4827]: I0126 09:08:31.912907 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:31.995744 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:31.996573 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.496468936 +0000 UTC m=+141.145140755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.014011 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.018172 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.062043 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.083359 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.099928 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.100508 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:32.600496267 +0000 UTC m=+141.249168076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.119738 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.201338 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.201746 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.701726003 +0000 UTC m=+141.350397822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.269877 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mvwnc"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.302610 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.302976 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.802958799 +0000 UTC m=+141.451630618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.353059 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5"] Jan 26 09:08:32 crc kubenswrapper[4827]: W0126 09:08:32.383246 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf390725c_ffef_45f6_bbc2_0145b811f4d5.slice/crio-55e3b335200f4320bd1ff90b4623231d9c963e3548ac30487e170acf2aecb37a WatchSource:0}: Error finding container 55e3b335200f4320bd1ff90b4623231d9c963e3548ac30487e170acf2aecb37a: Status 404 returned error can't find the container with id 55e3b335200f4320bd1ff90b4623231d9c963e3548ac30487e170acf2aecb37a Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.406768 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.407224 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:32.907204606 +0000 UTC m=+141.555876425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.510837 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.515270 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.015253962 +0000 UTC m=+141.663925781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.529799 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.602005 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-st6nr"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.632631 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.633024 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.133005257 +0000 UTC m=+141.781677076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.671483 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.728366 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.740695 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.741066 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.241051814 +0000 UTC m=+141.889723633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.743625 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.812071 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh"] Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.844436 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.844978 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.344962242 +0000 UTC m=+141.993634061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:32 crc kubenswrapper[4827]: I0126 09:08:32.948446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:32 crc kubenswrapper[4827]: E0126 09:08:32.948846 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.448828458 +0000 UTC m=+142.097500277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.005821 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db426"] Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.029108 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" event={"ID":"b55756cc-0888-4a99-bfdf-6f4a7eafa65d","Type":"ContainerStarted","Data":"9e9af81d290b18d83262612a914f4ca3916d228b4dfc1907cfd751fd031e53ea"} Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.048375 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45cfc4e3_d32b_4e71_8038_89a9350cb87b.slice/crio-83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.057481 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.058066 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.558046026 +0000 UTC m=+142.206717845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.099427 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.107866 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.607849037 +0000 UTC m=+142.256520846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.132417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" event={"ID":"b8af31d9-704f-494e-be0e-df5743e8c0c0","Type":"ContainerStarted","Data":"08b4430cd18af96f00b1198eacd279a0829aff86bf0513b39bb5d5efe7a563f0"} Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.180769 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5"] Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.199384 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" event={"ID":"463a0b0e-04a4-4bc1-b865-46613288436b","Type":"ContainerStarted","Data":"807344766a26a518532ec1c2cd18d1d85eed801c758a5e27f410811a90af2a5c"} Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.208380 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.208940 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.70891612 +0000 UTC m=+142.357587969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.256851 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" event={"ID":"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01","Type":"ContainerStarted","Data":"a93befa2246f13304109175eee80ac90168c4c1641691edce36675aca8246c25"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.282695 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" event={"ID":"2c409bff-4b8d-4296-91a4-5436aadab19b","Type":"ContainerStarted","Data":"d9f21487a9c6398841d2fe54e50cd3866deef526dd8a2853be2c6185c1e8a937"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.282730 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" event={"ID":"2c409bff-4b8d-4296-91a4-5436aadab19b","Type":"ContainerStarted","Data":"624bb1ff70a2ae62f497bd4f5991b617b8f4a3316dd31d227ad4463e6eba3369"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.291116 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" event={"ID":"1001135b-5055-4366-a41e-84019fd4666b","Type":"ContainerStarted","Data":"04a239e1bfde779d8e1f8ac5fd760d19debb40a166e267fa541886a27e21d786"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.312274 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.312590 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.812578341 +0000 UTC m=+142.461250160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.328950 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" event={"ID":"500979f1-7a4a-4d40-8391-6df8d92f803a","Type":"ContainerStarted","Data":"c4a8ee7850a9a801bb2c52ab9b5a77f91214cd1ebdd15fd3d9680c4aec116641"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.328986 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" event={"ID":"500979f1-7a4a-4d40-8391-6df8d92f803a","Type":"ContainerStarted","Data":"0551c8912926a94d96942e6597b8470f84af2aa2147e0940b3b75bb5b011af4d"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.332050 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" event={"ID":"eb303a5e-d8a9-45be-8984-534092b4c2b7","Type":"ContainerStarted","Data":"5d6bb37e75525087fde521796c7d7b80e6a236593c5770fd2daddef20f1cb2cc"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.332965 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" event={"ID":"5e95674a-44b2-42a1-95fd-af905608305b","Type":"ContainerStarted","Data":"9a280c1e75bdf81e31ff1437c04c9cf405263cd483ed73a077bad2faceb9da4d"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.335074 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" event={"ID":"1282b8d2-fb85-4aa6-adf8-658f0fa77dee","Type":"ContainerStarted","Data":"cd03d8087fd77143ef92f89d7d913896278babbfee98cca6aae160e2085c4ce5"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.344111 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" event={"ID":"7a9fd91f-a5e7-491f-9e75-1766cefac723","Type":"ContainerStarted","Data":"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.344936 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.352869 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.355018 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" event={"ID":"a2c8d789-5fe6-4f51-b6b7-7a986933867d","Type":"ContainerStarted","Data":"17e1dbb9358de5e292c99da02a41048cbe70866feca467f9e9e9024e8fccefd4"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.361564 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-slntw" event={"ID":"53459cff-b8c1-495b-8d5e-49d54a77fb30","Type":"ContainerStarted","Data":"52c23202f07113fb8e6ca343c020be6f41053a94ebce463e3743c91a4bdb67b9"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.363912 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.392129 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.417318 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.418700 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:33.918684396 +0000 UTC m=+142.567356215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: W0126 09:08:33.441358 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode022fa35_5182_4d6b_b6f3_e05822ac8ee9.slice/crio-0f373ed3e6d411a8a0f32b998c69dc9b1bc117056d9a290e02f50851f010da95 WatchSource:0}: Error finding container 0f373ed3e6d411a8a0f32b998c69dc9b1bc117056d9a290e02f50851f010da95: Status 404 returned error can't find the container with id 0f373ed3e6d411a8a0f32b998c69dc9b1bc117056d9a290e02f50851f010da95
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.441786 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" event={"ID":"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2","Type":"ContainerStarted","Data":"00d543c0ba49e4e2c3a4f8327a8bbb0203257f225e3bec72bffee7990993dc23"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.459305 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tfwt" podStartSLOduration=117.459289113 podStartE2EDuration="1m57.459289113s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:33.449940025 +0000 UTC m=+142.098611844" watchObservedRunningTime="2026-01-26 09:08:33.459289113 +0000 UTC m=+142.107960932"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.460798 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" podStartSLOduration=117.460789153 podStartE2EDuration="1m57.460789153s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:33.412624395 +0000 UTC m=+142.061296234" watchObservedRunningTime="2026-01-26 09:08:33.460789153 +0000 UTC m=+142.109460972"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.471218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" event={"ID":"f390725c-ffef-45f6-bbc2-0145b811f4d5","Type":"ContainerStarted","Data":"55e3b335200f4320bd1ff90b4623231d9c963e3548ac30487e170acf2aecb37a"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.496022 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.503088 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ztlnq"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.504656 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm24n" podStartSLOduration=117.504627066 podStartE2EDuration="1m57.504627066s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:33.495795992 +0000 UTC m=+142.144467811" watchObservedRunningTime="2026-01-26 09:08:33.504627066 +0000 UTC m=+142.153298885"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.508522 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5724v" event={"ID":"c082a1b4-a8cb-4bd5-9034-1678368030c0","Type":"ContainerStarted","Data":"a15f34aff8eee53a0e9e74100a99960f8d11cdb8bed0ec152fd9eaf674885064"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.519086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.520342 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.020330383 +0000 UTC m=+142.669002192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.530883 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" podStartSLOduration=116.530868483 podStartE2EDuration="1m56.530868483s" podCreationTimestamp="2026-01-26 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:33.528136131 +0000 UTC m=+142.176807950" watchObservedRunningTime="2026-01-26 09:08:33.530868483 +0000 UTC m=+142.179540292"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.552970 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" event={"ID":"ec85065e-6410-43f8-9b49-bc0d1956b92d","Type":"ContainerStarted","Data":"e12ba46705412a34c9fae0102924a2e8460ba5e375eb9e43d722adaae7f850c4"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.568335 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" event={"ID":"00f5a10b-1353-4060-a2b0-7cc7d9980817","Type":"ContainerStarted","Data":"5c13cf0a88f8550242f31b0903bd2bf668e990b6da5fcbdda17357ebfab5ea72"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.576473 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cnfxn" event={"ID":"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a","Type":"ContainerStarted","Data":"8b62256b1d41a9ccb1a3425c852ba6cc81a17dfc5dd065bfa1d94edf1be6f957"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.585400 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" event={"ID":"45cfc4e3-d32b-4e71-8038-89a9350cb87b","Type":"ContainerStarted","Data":"c09b8a59c4d02fcf70adbf5ac4603b2b6c971f1a553c8dad198180a9c0e7468a"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.587835 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2vwz5" event={"ID":"d4f90fc1-5287-4e23-9f4a-4e194db3610b","Type":"ContainerStarted","Data":"421f28937c3b4835d1d284c76834326681699fe4adcfcaea1f70b3e8b90f52a2"}
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.621234 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.621414 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.121391695 +0000 UTC m=+142.770063524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.622293 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.622590 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.122582846 +0000 UTC m=+142.771254665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.694944 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5724v" podStartSLOduration=117.694926836 podStartE2EDuration="1m57.694926836s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:33.557555621 +0000 UTC m=+142.206227440" watchObservedRunningTime="2026-01-26 09:08:33.694926836 +0000 UTC m=+142.343598655"
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.696816 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6rb7x"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.725208 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.725618 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.22560334 +0000 UTC m=+142.874275159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.737483 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-skgbv"]
Jan 26 09:08:33 crc kubenswrapper[4827]: W0126 09:08:33.752181 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb20ab25d_a227_4eb6_ad5a_ab9ff491b751.slice/crio-18be5c8cfd0b968be9fcc4df880a611dca2736627c4815dca9aa205aee0f3b7c WatchSource:0}: Error finding container 18be5c8cfd0b968be9fcc4df880a611dca2736627c4815dca9aa205aee0f3b7c: Status 404 returned error can't find the container with id 18be5c8cfd0b968be9fcc4df880a611dca2736627c4815dca9aa205aee0f3b7c
Jan 26 09:08:33 crc kubenswrapper[4827]: W0126 09:08:33.753580 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00acaa94_9dfe_4d0f_9ea2_17870a8c1af5.slice/crio-ee732ccd615b95c3d482a6e0aaa62f8afb471b16a1b06ab83269c3499cbbb1ca WatchSource:0}: Error finding container ee732ccd615b95c3d482a6e0aaa62f8afb471b16a1b06ab83269c3499cbbb1ca: Status 404 returned error can't find the container with id ee732ccd615b95c3d482a6e0aaa62f8afb471b16a1b06ab83269c3499cbbb1ca
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.804292 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pwkmz"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.812130 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"]
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.830033 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.830519 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.330507644 +0000 UTC m=+142.979179463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:33 crc kubenswrapper[4827]: I0126 09:08:33.931138 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:33 crc kubenswrapper[4827]: E0126 09:08:33.931450 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.431433542 +0000 UTC m=+143.080105361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.032167 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5724v"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.032786 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.033132 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.53312049 +0000 UTC m=+143.181792309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.041086 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 09:08:34 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld
Jan 26 09:08:34 crc kubenswrapper[4827]: [+]process-running ok
Jan 26 09:08:34 crc kubenswrapper[4827]: healthz check failed
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.041131 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 09:08:34 crc kubenswrapper[4827]: W0126 09:08:34.054499 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf71e3ecb_16f2_4a94_b392_36d84d69b692.slice/crio-3adb41e55b65ca3d866c8fc2670fcbe3f1b40454909e7d07823d2e1a417023d9 WatchSource:0}: Error finding container 3adb41e55b65ca3d866c8fc2670fcbe3f1b40454909e7d07823d2e1a417023d9: Status 404 returned error can't find the container with id 3adb41e55b65ca3d866c8fc2670fcbe3f1b40454909e7d07823d2e1a417023d9
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.118533 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6"]
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.132503 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8bdhj"]
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.135564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.135985 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.63597063 +0000 UTC m=+143.284642449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.239104 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.239465 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.739452376 +0000 UTC m=+143.388124195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.342519 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.342837 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.842820628 +0000 UTC m=+143.491492447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.443479 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.444000 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:34.943989214 +0000 UTC m=+143.592661033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.545098 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.545440 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.045422186 +0000 UTC m=+143.694094005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.615308 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" event={"ID":"45cfc4e3-d32b-4e71-8038-89a9350cb87b","Type":"ContainerStarted","Data":"83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.616307 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.627007 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" event={"ID":"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a","Type":"ContainerStarted","Data":"17e0de10e141340322c10470c460107017d8bc52d9a1223a6ecbfafd6da391c8"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.637179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" event={"ID":"b8af31d9-704f-494e-be0e-df5743e8c0c0","Type":"ContainerStarted","Data":"c963e5cd48fc95a044ee6a73e11ec42bcbc510d63eb5401e1e86616d6908f74d"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.638671 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" podStartSLOduration=118.638650669 podStartE2EDuration="1m58.638650669s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:34.637449628 +0000 UTC m=+143.286121447" watchObservedRunningTime="2026-01-26 09:08:34.638650669 +0000 UTC m=+143.287322478"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.646283 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.646551 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.146539959 +0000 UTC m=+143.795211778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.646940 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" event={"ID":"3d1327f0-1810-452b-a195-b40a94c96326","Type":"ContainerStarted","Data":"4f5dc32ea2c857e66ebcba3e2506093defe2d09b1073d3eb2fc87d66f6070618"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.655980 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" event={"ID":"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5","Type":"ContainerStarted","Data":"ee732ccd615b95c3d482a6e0aaa62f8afb471b16a1b06ab83269c3499cbbb1ca"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.666333 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.670664 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" event={"ID":"5e95674a-44b2-42a1-95fd-af905608305b","Type":"ContainerStarted","Data":"5c37f9a2159f126724e5e3e9a0af0220b28cf5ab79cbd3a4d0748229dfa5e389"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.671045 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.682877 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" event={"ID":"7c7e8e22-f14f-47c2-b3a5-6f24a7ffcbf2","Type":"ContainerStarted","Data":"83537a815ca4bd9f0d9f3e365473f13c4f6d49ea722e1edd09db1c327a3132c5"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.696251 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j"
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.699775 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" event={"ID":"82cf93d4-596a-45aa-80e6-3cc69672a99f","Type":"ContainerStarted","Data":"5051ee44d07cd6219e756646819c86b402236c036bd1c0118319d66565540e42"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.699805 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" event={"ID":"82cf93d4-596a-45aa-80e6-3cc69672a99f","Type":"ContainerStarted","Data":"f5cf390acdd7009d99a71154dbf8fda30a94dab4b2c0484d37c90cfd8ecf5696"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.711851 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" event={"ID":"716c8461-03fe-49c8-b3de-a254285cdd7d","Type":"ContainerStarted","Data":"ff44a642b0b6594ab5c03bafeecde368019ec47ed12f01c37833e669e071b8fd"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.711894 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" event={"ID":"716c8461-03fe-49c8-b3de-a254285cdd7d","Type":"ContainerStarted","Data":"760e854dbcb81d35914facd3a15394fc8efb7f089586255771510a9fb240d629"}
Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.724542 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" event={"ID":"b55756cc-0888-4a99-bfdf-6f4a7eafa65d","Type":"ContainerStarted","Data":"739c9347b4902e15102e198a12272eef759cea9f3aa8a847341690deda9637d2"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.740602 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lzr6j" podStartSLOduration=118.740584404 podStartE2EDuration="1m58.740584404s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:34.739799633 +0000 UTC m=+143.388471462" watchObservedRunningTime="2026-01-26 09:08:34.740584404 +0000 UTC m=+143.389256213" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.742436 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" event={"ID":"3e851e60-0051-41ea-8a88-b16366bac737","Type":"ContainerStarted","Data":"8654bf80bf598b93813bbcb18918561857870d0ab8a2060ebdb5a26ac4985e50"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.746942 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.747047 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.247022785 +0000 UTC m=+143.895694604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.747489 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.752829 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.252807819 +0000 UTC m=+143.901479638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.764880 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" event={"ID":"ec85065e-6410-43f8-9b49-bc0d1956b92d","Type":"ContainerStarted","Data":"9b6761ec22e1f01d29fcc9d37f7ccc9c1637fb80a6a3fc66ab375235434f3703"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.796959 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrgnr" podStartSLOduration=118.79694398 podStartE2EDuration="1m58.79694398s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:34.795500071 +0000 UTC m=+143.444171891" watchObservedRunningTime="2026-01-26 09:08:34.79694398 +0000 UTC m=+143.445615799" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.810015 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" event={"ID":"abf06eac-0589-4c69-9244-c9b5b35e0356","Type":"ContainerStarted","Data":"e4e561788b80906f48fa53e99d5974ba4d7ef3778097fa51599644d40b850ad2"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.816856 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" 
event={"ID":"215ad331-8016-474c-a940-47d0619b69cb","Type":"ContainerStarted","Data":"600ad668da42dd8542e2ceeca21971dcd679666fb79d3f6b4f7e2ad6db6047ae"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.848691 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.849728 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.3497126 +0000 UTC m=+143.998384419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.865874 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6vpjj" podStartSLOduration=118.865858108 podStartE2EDuration="1m58.865858108s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:34.865023667 +0000 UTC m=+143.513695476" watchObservedRunningTime="2026-01-26 09:08:34.865858108 +0000 UTC m=+143.514529927" Jan 
26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.895283 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" event={"ID":"eb303a5e-d8a9-45be-8984-534092b4c2b7","Type":"ContainerStarted","Data":"1b25779ce44dc29efea98fbd785c9feae16ac5a9b53de5f3b0c9a873f076701c"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.905432 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2vwz5" event={"ID":"d4f90fc1-5287-4e23-9f4a-4e194db3610b","Type":"ContainerStarted","Data":"c345f9fed4043bf72ec67db3d26ef27f42955a748c33838f92489ad84f322fd6"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.906608 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2vwz5" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.908393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" event={"ID":"a369544e-862d-41b6-928a-f1295ad7e93c","Type":"ContainerStarted","Data":"1cd216098f4f02e8884925239d473f8ed5a92c7c8123e1bf3916ac58511429e5"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.910659 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" event={"ID":"4026cd0f-c59b-4d79-be00-89bf2fd4ba84","Type":"ContainerStarted","Data":"c41220c6068f11d4f489d1ffaf7a6f228144723c8130e1c47f21bf589e02233e"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.911004 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6z7p" podStartSLOduration=118.910993676 podStartE2EDuration="1m58.910993676s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 09:08:34.909316482 +0000 UTC m=+143.557988311" watchObservedRunningTime="2026-01-26 09:08:34.910993676 +0000 UTC m=+143.559665485" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.921229 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-2vwz5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.921466 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2vwz5" podUID="d4f90fc1-5287-4e23-9f4a-4e194db3610b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.925965 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" event={"ID":"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2","Type":"ContainerStarted","Data":"f95d3e3368e932c5e7c3f68279e4e9bc3ddcae1b6d684043beeaff19ce1f01fc"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.930351 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" event={"ID":"00f5a10b-1353-4060-a2b0-7cc7d9980817","Type":"ContainerStarted","Data":"a9c21ebf475dbc4d7d75164a5e33339d69cb461986d019ce2a33ade2253a2c98"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.958442 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 
26 09:08:34 crc kubenswrapper[4827]: E0126 09:08:34.958811 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.458799095 +0000 UTC m=+144.107470914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.958986 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2vwz5" podStartSLOduration=118.958970599 podStartE2EDuration="1m58.958970599s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:34.957968673 +0000 UTC m=+143.606640512" watchObservedRunningTime="2026-01-26 09:08:34.958970599 +0000 UTC m=+143.607642418" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.989064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" event={"ID":"f390725c-ffef-45f6-bbc2-0145b811f4d5","Type":"ContainerStarted","Data":"ad85afd2031d8f203b35a5c95e806454d71fbd21efe3c8bda6f6cae03f8f84c9"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.989987 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.994006 4827 
patch_prober.go:28] interesting pod/console-operator-58897d9998-mvwnc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.994061 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" podUID="f390725c-ffef-45f6-bbc2-0145b811f4d5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.995130 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" event={"ID":"1001135b-5055-4366-a41e-84019fd4666b","Type":"ContainerStarted","Data":"3f373d337766710dbdaf790cd9facf71bab76421c5d621d0b46ceec4864c8bdd"} Jan 26 09:08:34 crc kubenswrapper[4827]: I0126 09:08:34.999411 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" event={"ID":"1282b8d2-fb85-4aa6-adf8-658f0fa77dee","Type":"ContainerStarted","Data":"cf3f94d1331cfd43b16352f2b4c10ff8958bb5a79262bd8f18d8cdae7bea996e"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.021081 4827 generic.go:334] "Generic (PLEG): container finished" podID="53459cff-b8c1-495b-8d5e-49d54a77fb30" containerID="237480dc1ec2f90725b5a01b2703f6ec56c8da11d898caf730a559d37c233b9e" exitCode=0 Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.021172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-slntw" event={"ID":"53459cff-b8c1-495b-8d5e-49d54a77fb30","Type":"ContainerDied","Data":"237480dc1ec2f90725b5a01b2703f6ec56c8da11d898caf730a559d37c233b9e"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.045760 4827 
patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:35 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:35 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:35 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.045783 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" event={"ID":"d496a86c-e689-4511-8e17-bf8a246668e5","Type":"ContainerStarted","Data":"909db808047cf62fe976f848b4a62a55b982d601ef1a1f61b2820144aaddc3af"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.045795 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.065088 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.065686 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.565665231 +0000 UTC m=+144.214337050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.073905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" event={"ID":"463a0b0e-04a4-4bc1-b865-46613288436b","Type":"ContainerStarted","Data":"820c0f73bdb9b6df588713ca7a0a89abf357c2a782465baccda18736b4d54f69"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.109292 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" event={"ID":"79daa322-67c6-43f1-920a-3aafd45b8b75","Type":"ContainerStarted","Data":"70ac81199feacd429641083b278d9771e598c557d04b190d6e3b1f1f25fc50f7"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.134830 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerStarted","Data":"0f373ed3e6d411a8a0f32b998c69dc9b1bc117056d9a290e02f50851f010da95"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.167018 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.171490 
4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.671469478 +0000 UTC m=+144.320141297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.246134 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" podStartSLOduration=119.246115409 podStartE2EDuration="1m59.246115409s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:35.064085429 +0000 UTC m=+143.712757248" watchObservedRunningTime="2026-01-26 09:08:35.246115409 +0000 UTC m=+143.894787228" Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.268134 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.268296 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.768266347 +0000 UTC m=+144.416938166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.268541 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.269191 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.769181061 +0000 UTC m=+144.417852960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.301034 4827 generic.go:334] "Generic (PLEG): container finished" podID="8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01" containerID="b74ca34434a14a6a0e18a1444666d63856e7042af19933d33a400f97a01924ab" exitCode=0 Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.301119 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" event={"ID":"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01","Type":"ContainerDied","Data":"b74ca34434a14a6a0e18a1444666d63856e7042af19933d33a400f97a01924ab"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.350913 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgv9x" event={"ID":"a2c8d789-5fe6-4f51-b6b7-7a986933867d","Type":"ContainerStarted","Data":"fd621ad79dd66e3a4e5415f7f611af8ab78f882a7905d6875301a70c6ed84764"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.352750 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q57l9" event={"ID":"7993ae76-3a13-4a6f-963b-8c9d854b48ec","Type":"ContainerStarted","Data":"701b2c676031031812dd2e082bd2d679ee9c54783c3bf525852ffefa2e7f616d"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.352775 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q57l9" 
event={"ID":"7993ae76-3a13-4a6f-963b-8c9d854b48ec","Type":"ContainerStarted","Data":"d52437f271e180c77ceb200e719c8f557052d9ede81798b075acfa6065ca261d"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.364629 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pwkmz" event={"ID":"f71e3ecb-16f2-4a94-b392-36d84d69b692","Type":"ContainerStarted","Data":"3adb41e55b65ca3d866c8fc2670fcbe3f1b40454909e7d07823d2e1a417023d9"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.365551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6rb7x" event={"ID":"ed7d25d6-390f-45d1-ab3e-af28799a9a70","Type":"ContainerStarted","Data":"8071d40a2af838a0f8c5af107f1df97dbfdcd8328650c0bcee4c8770d92a1b5d"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.369610 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.370173 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.87015428 +0000 UTC m=+144.518826099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.376004 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" event={"ID":"b20ab25d-a227-4eb6-ad5a-ab9ff491b751","Type":"ContainerStarted","Data":"18be5c8cfd0b968be9fcc4df880a611dca2736627c4815dca9aa205aee0f3b7c"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.386900 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" event={"ID":"1918f42b-ecd2-4800-85ab-fbc705acccd7","Type":"ContainerStarted","Data":"5bf7eacb45b690e67abd339fc35491504291a4db8ec33b703dab64beaa3389bf"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.389167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" event={"ID":"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef","Type":"ContainerStarted","Data":"869e988ee7e97ede144a3d6fdd04739999d7f022bcfabc2d98f327c11ba088d2"} Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.405467 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" podStartSLOduration=119.405448127 podStartE2EDuration="1m59.405448127s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:35.402692134 +0000 UTC m=+144.051363953" 
watchObservedRunningTime="2026-01-26 09:08:35.405448127 +0000 UTC m=+144.054119946" Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.473674 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.475945 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:35.975931107 +0000 UTC m=+144.624602926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.512512 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2m7j" podStartSLOduration=119.512498158 podStartE2EDuration="1m59.512498158s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:35.510998267 +0000 UTC m=+144.159670086" watchObservedRunningTime="2026-01-26 09:08:35.512498158 +0000 UTC m=+144.161169987" Jan 26 09:08:35 crc 
kubenswrapper[4827]: I0126 09:08:35.574998 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.575324 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.075310614 +0000 UTC m=+144.723982433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.676998 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.677708 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:36.177690491 +0000 UTC m=+144.826362310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.784130 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-q57l9" podStartSLOduration=7.784115125 podStartE2EDuration="7.784115125s" podCreationTimestamp="2026-01-26 09:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:35.677341132 +0000 UTC m=+144.326012951" watchObservedRunningTime="2026-01-26 09:08:35.784115125 +0000 UTC m=+144.432786944" Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.788087 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.788408 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.288392719 +0000 UTC m=+144.937064528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.890002 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.890344 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.390330114 +0000 UTC m=+145.039001933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.990907 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.991090 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.491060147 +0000 UTC m=+145.139731976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:35 crc kubenswrapper[4827]: I0126 09:08:35.991370 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:35 crc kubenswrapper[4827]: E0126 09:08:35.991664 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.491652552 +0000 UTC m=+145.140324371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.038002 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:36 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:36 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:36 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.038060 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.057031 4827 csr.go:261] certificate signing request csr-z6swc is approved, waiting to be issued Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.065624 4827 csr.go:257] certificate signing request csr-z6swc is issued Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.094427 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:36 crc 
kubenswrapper[4827]: E0126 09:08:36.094907 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.594889942 +0000 UTC m=+145.243561761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.225728 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.226181 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.726153675 +0000 UTC m=+145.374825494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.338783 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.339095 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.839079422 +0000 UTC m=+145.487751241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.401565 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pwkmz" event={"ID":"f71e3ecb-16f2-4a94-b392-36d84d69b692","Type":"ContainerStarted","Data":"5193dc4f3e5a13e2cb834a665c389988080c0e3b2154e0527f1a28c02050c723"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.440761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.448315 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:36.94829129 +0000 UTC m=+145.596963109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.454327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6rb7x" event={"ID":"ed7d25d6-390f-45d1-ab3e-af28799a9a70","Type":"ContainerStarted","Data":"acf5da8eff4147a89629168f0cbf8be8a59064ac7369e5821a70454cff230ec0"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.462093 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" event={"ID":"00acaa94-9dfe-4d0f-9ea2-17870a8c1af5","Type":"ContainerStarted","Data":"eff444550a90f3dca0b8265f3df2a9bc1ece0729b1981c38de23420e5e0be043"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.486718 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pwkmz" podStartSLOduration=8.486698259 podStartE2EDuration="8.486698259s" podCreationTimestamp="2026-01-26 09:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.485100656 +0000 UTC m=+145.133772475" watchObservedRunningTime="2026-01-26 09:08:36.486698259 +0000 UTC m=+145.135370078" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.541896 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.542326 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.042304765 +0000 UTC m=+145.690976584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.542465 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.542764 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.042755677 +0000 UTC m=+145.691427486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.565027 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.565287 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4rf7" podStartSLOduration=120.565276544 podStartE2EDuration="2m0.565276544s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.560505257 +0000 UTC m=+145.209177066" watchObservedRunningTime="2026-01-26 09:08:36.565276544 +0000 UTC m=+145.213948363" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.565937 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.570847 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.603162 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.623156 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" event={"ID":"d496a86c-e689-4511-8e17-bf8a246668e5","Type":"ContainerStarted","Data":"9da2f412e72d1b96592288ec3e8d9d58406dbfa985528fdced1047de65d48912"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.623204 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.645368 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.645525 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.645570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mdh\" 
(UniqueName: \"kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.645661 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.645754 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.14574087 +0000 UTC m=+145.794412689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.672098 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" event={"ID":"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a","Type":"ContainerStarted","Data":"b7dea8cbea12b61836f220f41d0e2f3dfddbadc1b787a2b8503ca4eb2715bfb7"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.682960 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.692994 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" event={"ID":"abf06eac-0589-4c69-9244-c9b5b35e0356","Type":"ContainerStarted","Data":"ea2f8954c9b9a99bbcda01415337cb79dab6aafce02c0271f3c3762919b47e0d"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.715058 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" event={"ID":"1001135b-5055-4366-a41e-84019fd4666b","Type":"ContainerStarted","Data":"ba8840cfc683a213270d7cc86c5f394ec26d532b517217556ff2ca065992c8dc"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.733903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" 
event={"ID":"1282b8d2-fb85-4aa6-adf8-658f0fa77dee","Type":"ContainerStarted","Data":"8fee9bc51719b9c58b80f49e3328454806b614208aacbd17d9fc73f546b9906b"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.734321 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vts6f" podStartSLOduration=120.73430456 podStartE2EDuration="2m0.73430456s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.715073389 +0000 UTC m=+145.363745208" watchObservedRunningTime="2026-01-26 09:08:36.73430456 +0000 UTC m=+145.382976369" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.746667 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mdh\" (UniqueName: \"kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.747153 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.747320 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 
26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.747509 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.748047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.748792 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.248778593 +0000 UTC m=+145.897450422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.749113 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.765174 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" event={"ID":"a369544e-862d-41b6-928a-f1295ad7e93c","Type":"ContainerStarted","Data":"bc2f22939beec7a6cbb58f11b87d2981266e0e270b6715111b6b1e1eb06b5ca7"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.765724 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.767089 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rv4tx"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.768041 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.774042 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" event={"ID":"215ad331-8016-474c-a940-47d0619b69cb","Type":"ContainerStarted","Data":"200515a105d84281521dd86fe193edd1dd14cf5923b5806b21c56a4b43169644"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.778117 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.784927 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerStarted","Data":"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.785682 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.803883 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsztb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.804227 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.817684 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" event={"ID":"ec85065e-6410-43f8-9b49-bc0d1956b92d","Type":"ContainerStarted","Data":"7b59f0aa64c1d0482bd5b8ad59304998d6f0a05afd09b25b707ef5e8b706f59e"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.821227 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mdh\" (UniqueName: \"kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh\") pod \"certified-operators-tl6c2\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.836894 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" event={"ID":"716c8461-03fe-49c8-b3de-a254285cdd7d","Type":"ContainerStarted","Data":"ee953e1c276fe6db22aeb29876b9cf02ab5940d09799dff73c22e3c4088c4a7d"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.851966 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.853154 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.353134573 +0000 UTC m=+146.001806392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.868938 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" event={"ID":"00f5a10b-1353-4060-a2b0-7cc7d9980817","Type":"ContainerStarted","Data":"13719f610f1191a3efe3e1ed6a97b05ef041caed34a0b66473a52c5026a36544"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.882862 4827 generic.go:334] "Generic (PLEG): container finished" podID="8a1d3fee-212c-4628-b549-1c4d3e4cd0a2" containerID="6490f34f63c2be3fc22adf35f3249982d537679b587a3bda4466d69252882428" exitCode=0 Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.882946 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" event={"ID":"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2","Type":"ContainerDied","Data":"6490f34f63c2be3fc22adf35f3249982d537679b587a3bda4466d69252882428"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.883670 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xwz57" podStartSLOduration=120.883631242 podStartE2EDuration="2m0.883631242s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.879271026 +0000 UTC m=+145.527942845" watchObservedRunningTime="2026-01-26 09:08:36.883631242 +0000 UTC m=+145.532303071" Jan 26 09:08:36 crc 
kubenswrapper[4827]: I0126 09:08:36.884318 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" podStartSLOduration=120.88430982 podStartE2EDuration="2m0.88430982s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.851578832 +0000 UTC m=+145.500250651" watchObservedRunningTime="2026-01-26 09:08:36.88430982 +0000 UTC m=+145.532981639" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.902473 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.917325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" event={"ID":"b8af31d9-704f-494e-be0e-df5743e8c0c0","Type":"ContainerStarted","Data":"849a646086f6b527febe4a146cfe4a8eddf819dcd7fcd1ae75b0f01005b5a543"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.917466 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.923259 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.926262 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" event={"ID":"b20ab25d-a227-4eb6-ad5a-ab9ff491b751","Type":"ContainerStarted","Data":"c4c310d93cdce0e2a44fd209de2b9ce144ed2861d96fff7a420c6e4637a8fb2a"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.944770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cnfxn" event={"ID":"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a","Type":"ContainerStarted","Data":"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.947950 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8xtx5" podStartSLOduration=120.947932208 podStartE2EDuration="2m0.947932208s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.945572616 +0000 UTC m=+145.594244435" watchObservedRunningTime="2026-01-26 09:08:36.947932208 +0000 UTC m=+145.596604027" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.956318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.961401 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" 
event={"ID":"eb303a5e-d8a9-45be-8984-534092b4c2b7","Type":"ContainerStarted","Data":"33775c55ee9a647108fd05225b5321db5b3d76b7f725248aa7abd940bf9abecd"} Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.977658 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv4tx"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.978721 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjj7\" (UniqueName: \"kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.978894 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.979011 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.993030 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2kjw" podStartSLOduration=120.993011855 podStartE2EDuration="2m0.993011855s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:36.978232062 +0000 UTC m=+145.626903881" watchObservedRunningTime="2026-01-26 09:08:36.993011855 +0000 UTC m=+145.641683674" Jan 26 09:08:36 crc kubenswrapper[4827]: E0126 09:08:36.982460 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.482446794 +0000 UTC m=+146.131118613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.993500 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:08:36 crc kubenswrapper[4827]: I0126 09:08:36.997198 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" event={"ID":"3e851e60-0051-41ea-8a88-b16366bac737","Type":"ContainerStarted","Data":"c27dc22dfc4b9ae68d3a1f155a6ca3e80304719c2530b11e15c29042af4aab44"} Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.011273 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lqwh" event={"ID":"79daa322-67c6-43f1-920a-3aafd45b8b75","Type":"ContainerStarted","Data":"b9c75fbd4ede15cfee56e76a236b8d924e962e01c85e2bcfc859b4eafcf5418b"} Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.013992 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" event={"ID":"4026cd0f-c59b-4d79-be00-89bf2fd4ba84","Type":"ContainerStarted","Data":"9b2799be298fa029b02ed0dab2f54e5d13ad25c38eb6b585143c6decf239e5af"} Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.015673 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-2vwz5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.015727 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2vwz5" podUID="d4f90fc1-5287-4e23-9f4a-4e194db3610b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.043932 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:37 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:37 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:37 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.044179 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.066585 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 09:03:36 +0000 UTC, 
rotation deadline is 2026-11-30 23:47:46.049329986 +0000 UTC Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.066809 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7406h39m8.982525283s for next certificate rotation Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.099327 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.099544 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrvn\" (UniqueName: \"kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.099843 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsjj7\" (UniqueName: \"kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.099922 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.099988 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.100181 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.100268 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.100968 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.600947819 +0000 UTC m=+146.249619638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.118973 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.120986 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.122951 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" podStartSLOduration=121.122940752 podStartE2EDuration="2m1.122940752s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.060875276 +0000 UTC m=+145.709547095" watchObservedRunningTime="2026-01-26 09:08:37.122940752 +0000 UTC m=+145.771612571" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.124249 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:08:37 crc 
kubenswrapper[4827]: I0126 09:08:37.137394 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.174391 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5g848" podStartSLOduration=121.17437326699999 podStartE2EDuration="2m1.174373267s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.134764946 +0000 UTC m=+145.783436765" watchObservedRunningTime="2026-01-26 09:08:37.174373267 +0000 UTC m=+145.823045086" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.174526 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.182547 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsjj7\" (UniqueName: \"kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7\") pod \"community-operators-rv4tx\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") " pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201099 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndtwx\" (UniqueName: \"kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201173 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201215 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201282 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201308 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201330 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hrvn\" (UniqueName: \"kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.201385 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.201767 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.701757404 +0000 UTC m=+146.350429223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.202178 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.202497 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.246312 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9hrvn\" (UniqueName: \"kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn\") pod \"certified-operators-vx7kr\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.310684 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.311334 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.311423 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.811399943 +0000 UTC m=+146.460071802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.311515 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndtwx\" (UniqueName: \"kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.311599 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.312078 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.312710 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities\") pod \"community-operators-c4r4r\" (UID: 
\"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.312758 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.312904 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.812893373 +0000 UTC m=+146.461565192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.329931 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rtv5j" podStartSLOduration=121.329913355 podStartE2EDuration="2m1.329913355s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.237862222 +0000 UTC m=+145.886534041" watchObservedRunningTime="2026-01-26 09:08:37.329913355 +0000 UTC m=+145.978585174" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.390347 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.415309 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.415712 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:37.91569603 +0000 UTC m=+146.564367849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.420498 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" podStartSLOduration=120.420485398 podStartE2EDuration="2m0.420485398s" podCreationTimestamp="2026-01-26 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.310871809 +0000 UTC m=+145.959543628" watchObservedRunningTime="2026-01-26 09:08:37.420485398 +0000 UTC m=+146.069157217" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.424113 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndtwx\" (UniqueName: \"kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx\") pod \"community-operators-c4r4r\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.453241 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mvwnc" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.467051 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.518322 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.518649 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.018616662 +0000 UTC m=+146.667288481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.535548 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-skgbv" podStartSLOduration=121.53553063 podStartE2EDuration="2m1.53553063s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.4201813 +0000 UTC m=+146.068853119" watchObservedRunningTime="2026-01-26 09:08:37.53553063 +0000 UTC m=+146.184202449" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.546206 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.628336 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.628727 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:38.128711903 +0000 UTC m=+146.777383722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.651361 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5crkc" podStartSLOduration=121.651343144 podStartE2EDuration="2m1.651343144s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.645065107 +0000 UTC m=+146.293736926" watchObservedRunningTime="2026-01-26 09:08:37.651343144 +0000 UTC m=+146.300014963" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.730517 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.731000 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.230988807 +0000 UTC m=+146.879660626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.773898 4827 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cxjsh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.773967 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" podUID="a369544e-862d-41b6-928a-f1295ad7e93c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.797266 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ztlnq" podStartSLOduration=120.797251536 podStartE2EDuration="2m0.797251536s" podCreationTimestamp="2026-01-26 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.796325801 +0000 UTC m=+146.444997620" watchObservedRunningTime="2026-01-26 09:08:37.797251536 +0000 UTC m=+146.445923355" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.837093 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.837423 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.337407771 +0000 UTC m=+146.986079590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: W0126 09:08:37.938100 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19386a1_51f4_4396_b49d_4ee6974c1126.slice/crio-aae71ce3fa9bd19b31c88225b4c79df51dcd52d822e6a74f05a0b2ca4f3dd330 WatchSource:0}: Error finding container aae71ce3fa9bd19b31c88225b4c79df51dcd52d822e6a74f05a0b2ca4f3dd330: Status 404 returned error can't find the container with id aae71ce3fa9bd19b31c88225b4c79df51dcd52d822e6a74f05a0b2ca4f3dd330 Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.938235 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vbcq5" podStartSLOduration=120.938217126 podStartE2EDuration="2m0.938217126s" podCreationTimestamp="2026-01-26 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.852875521 +0000 UTC m=+146.501547340" watchObservedRunningTime="2026-01-26 09:08:37.938217126 +0000 UTC m=+146.586888945" Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.938490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:37 crc kubenswrapper[4827]: E0126 09:08:37.938923 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.438909154 +0000 UTC m=+147.087580983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.940687 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"] Jan 26 09:08:37 crc kubenswrapper[4827]: I0126 09:08:37.951042 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dw5zn" podStartSLOduration=121.951022297 podStartE2EDuration="2m1.951022297s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:37.945999913 +0000 UTC m=+146.594671732" watchObservedRunningTime="2026-01-26 09:08:37.951022297 +0000 UTC m=+146.599694116" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.039266 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.039764 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:38.53974799 +0000 UTC m=+147.188419809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.052415 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:38 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:38 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:38 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.052453 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.101559 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6rb7x" event={"ID":"ed7d25d6-390f-45d1-ab3e-af28799a9a70","Type":"ContainerStarted","Data":"01e1113f870d6ea3a49363d303237280d222c7018c18909accecb1d68b38f822"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.102326 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.128767 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" event={"ID":"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef","Type":"ContainerStarted","Data":"a6c242a3c79c6ca5b05dce835b0796a39ac7627005ea317a22cfce2e6bf77936"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.130125 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" event={"ID":"8a1d3fee-212c-4628-b549-1c4d3e4cd0a2","Type":"ContainerStarted","Data":"06bea1a2e091449c9e6fc3bed1b5a87cad557dea2287fd6993f9939cb28b49bc"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.142714 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.144690 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.145034 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.645023744 +0000 UTC m=+147.293695563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.175927 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" event={"ID":"3d1327f0-1810-452b-a195-b40a94c96326","Type":"ContainerStarted","Data":"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.176908 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.188441 4827 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rkgr6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.188499 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.230485 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" 
event={"ID":"4026cd0f-c59b-4d79-be00-89bf2fd4ba84","Type":"ContainerStarted","Data":"63cd756f0eb28556058540309382392b071639948f91f0eca761de6ff03888c5"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.230528 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.249175 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.250535 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.750514833 +0000 UTC m=+147.399186652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.261147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerStarted","Data":"aae71ce3fa9bd19b31c88225b4c79df51dcd52d822e6a74f05a0b2ca4f3dd330"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.264215 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cnfxn" podStartSLOduration=122.264197896 podStartE2EDuration="2m2.264197896s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.247449092 +0000 UTC m=+146.896120921" watchObservedRunningTime="2026-01-26 09:08:38.264197896 +0000 UTC m=+146.912869715" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.269726 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-slntw" event={"ID":"53459cff-b8c1-495b-8d5e-49d54a77fb30","Type":"ContainerStarted","Data":"9c01a8563f13cd8ddc8373a0fca082a5eafcbf04aaaddb364a6cfd4847b72258"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.300130 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" 
event={"ID":"abf06eac-0589-4c69-9244-c9b5b35e0356","Type":"ContainerStarted","Data":"22aacbd177d5ac1513803b7928be1d23e5069e316f3d3d1ddf98736dfdbc2228"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.316894 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" event={"ID":"1918f42b-ecd2-4800-85ab-fbc705acccd7","Type":"ContainerStarted","Data":"f639dd81b8c8e46af66999d36dcde7bdbbbc76af553bf5ffc9e28c22b6200daa"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.353460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.357004 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.856990548 +0000 UTC m=+147.505662357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.371250 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" event={"ID":"8aa6f77d-4bfc-4696-a9ed-6d7ea42d1a01","Type":"ContainerStarted","Data":"cc78cc9540098620ceef7f0fa1f12502e63bf19537d4d069f4c79c7bafbdb782"} Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.376561 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsztb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.376603 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.379001 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-2vwz5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.379025 4827 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-2vwz5" podUID="d4f90fc1-5287-4e23-9f4a-4e194db3610b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.454227 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.458080 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:38.958060761 +0000 UTC m=+147.606732580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.528700 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.529995 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.531923 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" podStartSLOduration=122.5319067 podStartE2EDuration="2m2.5319067s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.525417929 +0000 UTC m=+147.174089748" watchObservedRunningTime="2026-01-26 09:08:38.5319067 +0000 UTC m=+147.180578519" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.546966 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.552339 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.561312 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.561364 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.561404 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.561440 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.561463 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.561760 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.061745733 +0000 UTC m=+147.710417552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.575912 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.576854 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.585862 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.591423 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" 
podStartSLOduration=122.591402219 podStartE2EDuration="2m2.591402219s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.586151259 +0000 UTC m=+147.234823098" watchObservedRunningTime="2026-01-26 09:08:38.591402219 +0000 UTC m=+147.240074038" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.606901 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.671467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.671722 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.671765 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.671799 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlhp\" 
(UniqueName: \"kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.671926 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.171911436 +0000 UTC m=+147.820583255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.683316 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv4tx"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.710074 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" podStartSLOduration=122.710061377 podStartE2EDuration="2m2.710061377s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.709832211 +0000 UTC m=+147.358504030" watchObservedRunningTime="2026-01-26 09:08:38.710061377 +0000 UTC m=+147.358733196" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.773522 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.773587 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.773624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.773666 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwlhp\" (UniqueName: \"kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.774409 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.274395685 +0000 UTC m=+147.923067504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.778143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.778388 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.820001 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.873903 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.874764 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.875191 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.375174419 +0000 UTC m=+148.023846238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.902237 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.903229 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-db426" podStartSLOduration=122.903209653 podStartE2EDuration="2m2.903209653s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.79455258 +0000 UTC m=+147.443224399" watchObservedRunningTime="2026-01-26 09:08:38.903209653 +0000 UTC m=+147.551881472" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.905784 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.906967 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.907774 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxjsh" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.964994 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" podStartSLOduration=122.964977522 podStartE2EDuration="2m2.964977522s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:38.928528905 +0000 UTC m=+147.577200724" watchObservedRunningTime="2026-01-26 09:08:38.964977522 +0000 UTC m=+147.613649341" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.970413 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.972281 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.975755 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwlhp\" (UniqueName: \"kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp\") pod \"redhat-marketplace-8fr9c\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.976493 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.976530 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.976548 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxw4\" (UniqueName: \"kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:38 crc kubenswrapper[4827]: I0126 09:08:38.976596 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:38 crc kubenswrapper[4827]: E0126 09:08:38.977106 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.477092824 +0000 UTC m=+148.125764723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.004328 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-52lj8" podStartSLOduration=123.004313445 podStartE2EDuration="2m3.004313445s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:39.003927445 +0000 UTC m=+147.652599274" watchObservedRunningTime="2026-01-26 09:08:39.004313445 +0000 UTC m=+147.652985264" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.047428 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:39 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:39 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:39 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.047493 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.087628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.088110 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.088133 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qxw4\" (UniqueName: \"kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.088179 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.088552 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.088609 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.588597283 +0000 UTC m=+148.237269102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.088813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.116738 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-slntw" podStartSLOduration=123.116719939 podStartE2EDuration="2m3.116719939s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:39.087046771 +0000 UTC m=+147.735718590" watchObservedRunningTime="2026-01-26 09:08:39.116719939 +0000 UTC m=+147.765391758" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.119617 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.141567 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qxw4\" (UniqueName: \"kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4\") pod \"redhat-marketplace-s2hh2\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.157131 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.159951 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6rb7x" podStartSLOduration=11.159935566 podStartE2EDuration="11.159935566s" podCreationTimestamp="2026-01-26 09:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:39.159322499 +0000 UTC m=+147.807994318" watchObservedRunningTime="2026-01-26 09:08:39.159935566 +0000 UTC m=+147.808607385" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.190065 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.190487 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.690474686 +0000 UTC m=+148.339146505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.255138 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.291464 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.291850 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.791833395 +0000 UTC m=+148.440505214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.394339 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.395124 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:39.895112686 +0000 UTC m=+148.543784495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.412558 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-slntw" event={"ID":"53459cff-b8c1-495b-8d5e-49d54a77fb30","Type":"ContainerStarted","Data":"1795b3e4524f7e0de092057cca37cb4ec7d3a7c361a958cfd0950517ace44cb3"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.432211 4827 generic.go:334] "Generic (PLEG): container finished" podID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerID="791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90" exitCode=0 Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.432297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerDied","Data":"791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.477898 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerStarted","Data":"687e43433bd23342308129e66a4779c9be4b3dd630242b3777473623f7f3632f"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.478169 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.508405 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.509524 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.009504402 +0000 UTC m=+148.658176221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.534416 4827 generic.go:334] "Generic (PLEG): container finished" podID="163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" containerID="b7dea8cbea12b61836f220f41d0e2f3dfddbadc1b787a2b8503ca4eb2715bfb7" exitCode=0 Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.534523 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" event={"ID":"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a","Type":"ContainerDied","Data":"b7dea8cbea12b61836f220f41d0e2f3dfddbadc1b787a2b8503ca4eb2715bfb7"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.540158 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" 
event={"ID":"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef","Type":"ContainerStarted","Data":"900c013b3e521dad5a170f5be91518c9ab91647443275d2a3ee327d0c16aaad3"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.549308 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerStarted","Data":"ff676ca8ec85d909bd64f1ffc3b7dca715537b29f9e19ea9c557c1cd8c0635c2"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.558123 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerStarted","Data":"5143c4c2490ef0e45c3122e2ff4ec294229af9eb74738f7a2664dfbefa0e3b3e"} Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.559825 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsztb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.559873 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.612268 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.623935 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.123919968 +0000 UTC m=+148.772591787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.725522 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.726071 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.226056268 +0000 UTC m=+148.874728087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.837395 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.837702 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.33769002 +0000 UTC m=+148.986361839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.931265 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"] Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.932365 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.941509 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:39 crc kubenswrapper[4827]: E0126 09:08:39.941868 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.441850014 +0000 UTC m=+149.090521833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.950028 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 09:08:39 crc kubenswrapper[4827]: I0126 09:08:39.953398 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"] Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.042155 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 
09:08:40 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:40 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:40 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.042221 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.045344 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.045419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwdc\" (UniqueName: \"kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.045455 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.045480 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.045869 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.545858254 +0000 UTC m=+149.194530073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.149168 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.149444 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwdc\" (UniqueName: \"kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.149485 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.149510 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.150158 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.150241 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.650221384 +0000 UTC m=+149.298893203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.150944 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.218527 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwdc\" (UniqueName: \"kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc\") pod \"redhat-operators-2tmph\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") " pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.250419 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.250896 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:40.750883174 +0000 UTC m=+149.399554993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.312510 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.313420 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.327922 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.352152 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.352527 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.852509822 +0000 UTC m=+149.501181641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.355344 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.459411 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.459694 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.459719 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.459754 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhct\" (UniqueName: \"kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.460001 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:40.959989314 +0000 UTC m=+149.608661133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565105 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565442 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhct\" (UniqueName: \"kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565532 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565683 4827 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rkgr6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565723 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.565997 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.566074 4827 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.066058028 +0000 UTC m=+149.714729847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.566307 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.598951 4827 generic.go:334] "Generic (PLEG): container finished" podID="a713c948-1995-48a7-9c93-39b5e00934c0" containerID="f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5" exitCode=0 Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.599010 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerDied","Data":"f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.622449 4827 generic.go:334] "Generic (PLEG): container finished" podID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerID="71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3" exitCode=0 Jan 
26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.622505 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerDied","Data":"71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.654540 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhct\" (UniqueName: \"kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct\") pod \"redhat-operators-rvg5n\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.658951 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8992720467b01877d1806ea5b3daf0aa557f10f8fd5119711b4107027fc693b1"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.658997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"11d6868a46a4b07f622855294e3d525d185470fd70603f47aa6c75c4dca5e77c"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.659490 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.666476 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.666808 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.166797762 +0000 UTC m=+149.815469571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.685417 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.716381 4827 generic.go:334] "Generic (PLEG): container finished" podID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerID="d3073134841a61f15590f5171294c8a81ef35427dc19befe4988ee1e156a99e6" exitCode=0 Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.716479 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerDied","Data":"d3073134841a61f15590f5171294c8a81ef35427dc19befe4988ee1e156a99e6"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.768323 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.769310 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.269295422 +0000 UTC m=+149.917967231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.790139 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"70f5b224e046d5c092ad08a565ace0c9570da92e6f591dd7d4baf77c93e26276"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.790177 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8d7e4f693cb88457ce2f16fe26ae7aeac6bcd556ba6205a72ba75b5b005b6de3"} Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.842815 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-2vwz5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.842867 4827 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2vwz5" podUID="d4f90fc1-5287-4e23-9f4a-4e194db3610b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.843176 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-2vwz5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.843196 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2vwz5" podUID="d4f90fc1-5287-4e23-9f4a-4e194db3610b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.870272 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.870831 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.370816566 +0000 UTC m=+150.019488385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.967455 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.967858 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.971467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:40 crc kubenswrapper[4827]: E0126 09:08:40.972826 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.472807122 +0000 UTC m=+150.121478941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:40 crc kubenswrapper[4827]: I0126 09:08:40.995057 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.031420 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.038221 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:41 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:41 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:41 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.038271 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.063337 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.063379 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.077243 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.078588 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.578569448 +0000 UTC m=+150.227241267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.121112 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"] Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.224941 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.225533 4827 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.725512497 +0000 UTC m=+150.374184316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.288689 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.290113 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.308488 4827 patch_prober.go:28] interesting pod/console-f9d7485db-cnfxn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.308570 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cnfxn" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.334484 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.334820 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:41.834804807 +0000 UTC m=+150.483476626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.390181 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.437192 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.438375 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 09:08:41.938334575 +0000 UTC m=+150.587006394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.494845 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"] Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.517724 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.534341 4827 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-st6nr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.534433 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" podUID="8a1d3fee-212c-4628-b549-1c4d3e4cd0a2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.535195 4827 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-st6nr container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.535252 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" podUID="8a1d3fee-212c-4628-b549-1c4d3e4cd0a2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.539358 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.539952 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.039933902 +0000 UTC m=+150.688605721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.645255 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.645707 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.145685927 +0000 UTC m=+150.794357746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.749467 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.749799 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.24978766 +0000 UTC m=+150.898459479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.813592 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.814270 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.814293 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.814850 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.829031 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.829232 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.851054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.851431 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.351416747 +0000 UTC m=+151.000088566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.852786 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" event={"ID":"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef","Type":"ContainerStarted","Data":"e1c6d7aaf2214268e89826ba42d2c6127ddf275bc81b797573e79ac160fc3691"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.854987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerStarted","Data":"19906e92ea4bbd5b0b4844b4c4a4f9c37c4428c66658c20dd85ad9469ac507ff"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.876678 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerStarted","Data":"fe038d737b09d4b223b5ad4af166bc44c003c18299bd3434176092929ed018d3"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.932735 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.932772 4827 generic.go:334] "Generic (PLEG): container finished" podID="32023ace-27de-4377-9cfb-27c706ef9205" containerID="62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd" exitCode=0 Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.932795 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerDied","Data":"62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.933121 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerStarted","Data":"51ce757ae6c9980d40d8c9d6cb061989411b239324cefd6122db22e0cdd93138"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.954373 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2438f3280153e4d1afa9733b513636538e9196205d3fffbc8de6bf7c381fee0c"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.954427 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"98adeb84e998d7a96086416156d91be31543b3e713d5f056eff04f114014f398"} Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.960798 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.960876 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.960903 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:41 crc kubenswrapper[4827]: E0126 09:08:41.961658 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.461623471 +0000 UTC m=+151.110295290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:41 crc kubenswrapper[4827]: I0126 09:08:41.975965 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cgbmh" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.048452 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:42 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:42 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:42 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.048489 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.063565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.063610 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jbj97\" (UniqueName: \"kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97\") pod \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.063654 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume\") pod \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.063715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume\") pod \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\" (UID: \"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a\") " Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.064075 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.064120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.072340 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.572319989 +0000 UTC m=+151.220991808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.074543 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.083014 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume" (OuterVolumeSpecName: "config-volume") pod "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" (UID: "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.106878 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-st6nr" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.137100 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97" (OuterVolumeSpecName: "kube-api-access-jbj97") pod "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" (UID: "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a"). InnerVolumeSpecName "kube-api-access-jbj97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.149838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.155297 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" (UID: "163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.164781 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.166374 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.166517 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbj97\" (UniqueName: \"kubernetes.io/projected/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-kube-api-access-jbj97\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.166536 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.166564 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.167064 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.66697992 +0000 UTC m=+151.315651739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.275542 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.276392 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.776378944 +0000 UTC m=+151.425050763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.276973 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.276996 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.330735 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.383196 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.383537 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.883526057 +0000 UTC m=+151.532197876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.484140 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.484482 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.984466955 +0000 UTC m=+151.633138774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.484518 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.484793 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:42.984786354 +0000 UTC m=+151.633458173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.505728 4827 patch_prober.go:28] interesting pod/apiserver-76f77b778f-slntw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]log ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]etcd ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/max-in-flight-filter ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 09:08:42 crc kubenswrapper[4827]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 09:08:42 crc kubenswrapper[4827]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/openshift.io-startinformers ok Jan 26 09:08:42 crc kubenswrapper[4827]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 09:08:42 crc 
kubenswrapper[4827]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 09:08:42 crc kubenswrapper[4827]: livez check failed Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.505788 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-slntw" podUID="53459cff-b8c1-495b-8d5e-49d54a77fb30" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.586164 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.586575 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.086555844 +0000 UTC m=+151.735227663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.694345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.694900 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.194884399 +0000 UTC m=+151.843556218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.795849 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.795969 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.29594549 +0000 UTC m=+151.944617319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.796192 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.796549 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.296539016 +0000 UTC m=+151.945210835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.899336 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.899559 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.399528739 +0000 UTC m=+152.048200568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:42 crc kubenswrapper[4827]: I0126 09:08:42.899950 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:42 crc kubenswrapper[4827]: E0126 09:08:42.900294 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.400279279 +0000 UTC m=+152.048951098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.003989 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:43 crc kubenswrapper[4827]: E0126 09:08:43.004299 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.504281938 +0000 UTC m=+152.152953757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.020740 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" event={"ID":"163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a","Type":"ContainerDied","Data":"17e0de10e141340322c10470c460107017d8bc52d9a1223a6ecbfafd6da391c8"} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.020778 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e0de10e141340322c10470c460107017d8bc52d9a1223a6ecbfafd6da391c8" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.020860 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.036030 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:43 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:43 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:43 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.036115 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.077154 4827 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.105345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:43 crc kubenswrapper[4827]: E0126 09:08:43.105741 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.605725991 +0000 UTC m=+152.254397810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.116905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" event={"ID":"c41f5446-b61b-4e0c-bf6a-373f4df1b8ef","Type":"ContainerStarted","Data":"41b9eb52c45da5dc61c5cfc12613ca61b1be9838cf58ccd086473dbbf940f29c"} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.174998 4827 generic.go:334] "Generic (PLEG): container finished" podID="82835169-3dce-4182-8104-c3b09cc8e11c" containerID="7936ba4c155b33211c8b674be8c6ddcd0ca864b46fb3c130b5c3f2cf9dc095b1" exitCode=0 Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.175080 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerDied","Data":"7936ba4c155b33211c8b674be8c6ddcd0ca864b46fb3c130b5c3f2cf9dc095b1"} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.207320 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:43 crc kubenswrapper[4827]: E0126 09:08:43.207553 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.707529732 +0000 UTC m=+152.356201551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.207673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:43 crc kubenswrapper[4827]: E0126 09:08:43.208045 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 09:08:43.708017825 +0000 UTC m=+152.356689644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll4jw" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.229740 4827 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T09:08:43.077174973Z","Handler":null,"Name":""} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.238115 4827 generic.go:334] "Generic (PLEG): container finished" podID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerID="f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635" exitCode=0 Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.238226 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerDied","Data":"f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635"} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.239699 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8bdhj" podStartSLOduration=15.239678275 podStartE2EDuration="15.239678275s" podCreationTimestamp="2026-01-26 09:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:43.166281787 +0000 UTC m=+151.814953606" watchObservedRunningTime="2026-01-26 09:08:43.239678275 +0000 UTC m=+151.888350104" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.252250 4827 csi_plugin.go:100] 
kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.252432 4827 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.287488 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerStarted","Data":"986e1150f774b84d4156ecf3ba4ebf923ab540326fbc1bb87a012afdb6b1e000"} Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.311804 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.425367 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.472943 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.532700 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.730520 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.893224 4827 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.893297 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:43 crc kubenswrapper[4827]: I0126 09:08:43.947439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll4jw\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.034436 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:44 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:44 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:44 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.034491 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.236879 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.301361 4827 generic.go:334] "Generic (PLEG): container finished" podID="3b937570-c0d6-42de-abf9-d4699d59139e" containerID="12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506" exitCode=0 Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.301503 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerDied","Data":"12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506"} Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.304277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e91dd592-8b1f-467f-9b98-965c90129b84","Type":"ContainerStarted","Data":"4da6427684ca73ac8296c107a91be83ebc15ce9598ac5151d8723d2216c51309"} Jan 26 09:08:44 crc kubenswrapper[4827]: I0126 09:08:44.780953 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"] Jan 26 09:08:44 crc kubenswrapper[4827]: W0126 09:08:44.820901 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73eaaf34_a59b_4525_8a07_bd177f7b0995.slice/crio-14baccd4ae15c62eaa653d888b6ce3824f2c3137c2c94a7c5571516258ad4eb3 WatchSource:0}: Error finding container 14baccd4ae15c62eaa653d888b6ce3824f2c3137c2c94a7c5571516258ad4eb3: Status 404 returned error can't find the container with id 14baccd4ae15c62eaa653d888b6ce3824f2c3137c2c94a7c5571516258ad4eb3 Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.035701 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Jan 26 09:08:45 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:45 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:45 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.036069 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.319421 4827 generic.go:334] "Generic (PLEG): container finished" podID="e91dd592-8b1f-467f-9b98-965c90129b84" containerID="f58703f97cf9c1c4df321f743c9ac7f1fc1f69e2dd4dc5b9430a377a0f49ccd7" exitCode=0 Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.319496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e91dd592-8b1f-467f-9b98-965c90129b84","Type":"ContainerDied","Data":"f58703f97cf9c1c4df321f743c9ac7f1fc1f69e2dd4dc5b9430a377a0f49ccd7"} Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.323410 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" event={"ID":"73eaaf34-a59b-4525-8a07-bd177f7b0995","Type":"ContainerStarted","Data":"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"} Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.323452 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" event={"ID":"73eaaf34-a59b-4525-8a07-bd177f7b0995","Type":"ContainerStarted","Data":"14baccd4ae15c62eaa653d888b6ce3824f2c3137c2c94a7c5571516258ad4eb3"} Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.323563 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:08:45 crc 
kubenswrapper[4827]: I0126 09:08:45.361385 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" podStartSLOduration=129.361364646 podStartE2EDuration="2m9.361364646s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:45.352532402 +0000 UTC m=+154.001204231" watchObservedRunningTime="2026-01-26 09:08:45.361364646 +0000 UTC m=+154.010036455" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.990483 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 09:08:45 crc kubenswrapper[4827]: E0126 09:08:45.991295 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" containerName="collect-profiles" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.991337 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" containerName="collect-profiles" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.991550 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" containerName="collect-profiles" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.992236 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.997418 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 09:08:45 crc kubenswrapper[4827]: I0126 09:08:45.998812 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.000094 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.033591 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:46 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:46 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:46 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.033671 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.073537 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.079172 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-slntw" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.102266 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.102340 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.203149 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.203329 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.204836 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.287120 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.339903 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.788047 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.799288 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6rb7x" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.877422 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.933023 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access\") pod \"e91dd592-8b1f-467f-9b98-965c90129b84\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.933135 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir\") pod \"e91dd592-8b1f-467f-9b98-965c90129b84\" (UID: \"e91dd592-8b1f-467f-9b98-965c90129b84\") " Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.933831 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "e91dd592-8b1f-467f-9b98-965c90129b84" (UID: "e91dd592-8b1f-467f-9b98-965c90129b84"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:08:46 crc kubenswrapper[4827]: I0126 09:08:46.948864 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e91dd592-8b1f-467f-9b98-965c90129b84" (UID: "e91dd592-8b1f-467f-9b98-965c90129b84"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.034384 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e91dd592-8b1f-467f-9b98-965c90129b84-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.034415 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e91dd592-8b1f-467f-9b98-965c90129b84-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.034805 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:47 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:47 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:47 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.034844 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.355055 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.355369 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e91dd592-8b1f-467f-9b98-965c90129b84","Type":"ContainerDied","Data":"4da6427684ca73ac8296c107a91be83ebc15ce9598ac5151d8723d2216c51309"} Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.355391 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4da6427684ca73ac8296c107a91be83ebc15ce9598ac5151d8723d2216c51309" Jan 26 09:08:47 crc kubenswrapper[4827]: I0126 09:08:47.356296 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9da4d158-c2b2-4c71-b4ad-173a6e339c7c","Type":"ContainerStarted","Data":"482607f2fa2976a7240f7ed173f9fbff9d9cec3707d86c1e95faf5ca1fa3ec48"} Jan 26 09:08:48 crc kubenswrapper[4827]: I0126 09:08:48.033757 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:48 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:48 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:48 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:48 crc kubenswrapper[4827]: I0126 09:08:48.034065 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:48 crc kubenswrapper[4827]: I0126 09:08:48.401519 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9da4d158-c2b2-4c71-b4ad-173a6e339c7c","Type":"ContainerStarted","Data":"ee08b10a71e4ff2accf731db23fd1eb0a14b995b6d8a7b12c845e716e5610b50"} Jan 26 09:08:48 crc kubenswrapper[4827]: I0126 09:08:48.418698 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.418681694 podStartE2EDuration="3.418681694s" podCreationTimestamp="2026-01-26 09:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:08:48.418476119 +0000 UTC m=+157.067147948" watchObservedRunningTime="2026-01-26 09:08:48.418681694 +0000 UTC m=+157.067353513" Jan 26 09:08:49 crc kubenswrapper[4827]: I0126 09:08:49.034405 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:49 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:49 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:49 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:49 crc kubenswrapper[4827]: I0126 09:08:49.034716 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:49 crc kubenswrapper[4827]: I0126 09:08:49.422498 4827 generic.go:334] "Generic (PLEG): container finished" podID="9da4d158-c2b2-4c71-b4ad-173a6e339c7c" containerID="ee08b10a71e4ff2accf731db23fd1eb0a14b995b6d8a7b12c845e716e5610b50" exitCode=0 Jan 26 09:08:49 crc kubenswrapper[4827]: I0126 09:08:49.422540 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9da4d158-c2b2-4c71-b4ad-173a6e339c7c","Type":"ContainerDied","Data":"ee08b10a71e4ff2accf731db23fd1eb0a14b995b6d8a7b12c845e716e5610b50"} Jan 26 09:08:50 crc kubenswrapper[4827]: I0126 09:08:50.034912 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:50 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:50 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:50 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:50 crc kubenswrapper[4827]: I0126 09:08:50.034973 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:50 crc kubenswrapper[4827]: I0126 09:08:50.821739 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:50 crc kubenswrapper[4827]: I0126 09:08:50.845173 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2vwz5" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.022084 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access\") pod \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.022247 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir\") pod \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\" (UID: \"9da4d158-c2b2-4c71-b4ad-173a6e339c7c\") " Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.022759 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9da4d158-c2b2-4c71-b4ad-173a6e339c7c" (UID: "9da4d158-c2b2-4c71-b4ad-173a6e339c7c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.029892 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9da4d158-c2b2-4c71-b4ad-173a6e339c7c" (UID: "9da4d158-c2b2-4c71-b4ad-173a6e339c7c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.034109 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:51 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:51 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:51 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.034164 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.130376 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.130418 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da4d158-c2b2-4c71-b4ad-173a6e339c7c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.287851 4827 patch_prober.go:28] interesting pod/console-f9d7485db-cnfxn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.287991 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cnfxn" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.456050 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9da4d158-c2b2-4c71-b4ad-173a6e339c7c","Type":"ContainerDied","Data":"482607f2fa2976a7240f7ed173f9fbff9d9cec3707d86c1e95faf5ca1fa3ec48"} Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.456104 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482607f2fa2976a7240f7ed173f9fbff9d9cec3707d86c1e95faf5ca1fa3ec48" Jan 26 09:08:51 crc kubenswrapper[4827]: I0126 09:08:51.456121 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 09:08:52 crc kubenswrapper[4827]: I0126 09:08:52.033573 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:52 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:52 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:52 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:52 crc kubenswrapper[4827]: I0126 09:08:52.033897 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:53 crc kubenswrapper[4827]: I0126 09:08:53.034871 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Jan 26 09:08:53 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:53 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:53 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:53 crc kubenswrapper[4827]: I0126 09:08:53.034963 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:54 crc kubenswrapper[4827]: I0126 09:08:54.033649 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:54 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:54 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:54 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:54 crc kubenswrapper[4827]: I0126 09:08:54.033725 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:55 crc kubenswrapper[4827]: I0126 09:08:55.033674 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:55 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:55 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:55 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:55 crc kubenswrapper[4827]: I0126 09:08:55.033983 4827 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:56 crc kubenswrapper[4827]: I0126 09:08:56.036141 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:56 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:56 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:56 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:56 crc kubenswrapper[4827]: I0126 09:08:56.036198 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:57 crc kubenswrapper[4827]: I0126 09:08:57.034409 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:57 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:57 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:57 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:57 crc kubenswrapper[4827]: I0126 09:08:57.034508 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:58 crc kubenswrapper[4827]: I0126 09:08:58.033596 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:58 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:58 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:58 crc kubenswrapper[4827]: healthz check failed Jan 26 09:08:58 crc kubenswrapper[4827]: I0126 09:08:58.033676 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:59 crc kubenswrapper[4827]: I0126 09:08:59.000804 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:59 crc kubenswrapper[4827]: I0126 09:08:59.010583 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9bc714d-5eac-4b0e-8832-f65f57bffa1e-metrics-certs\") pod \"network-metrics-daemon-k927z\" (UID: \"a9bc714d-5eac-4b0e-8832-f65f57bffa1e\") " pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:08:59 crc kubenswrapper[4827]: I0126 09:08:59.038234 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:08:59 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:08:59 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:08:59 crc kubenswrapper[4827]: healthz check failed Jan 26 
09:08:59 crc kubenswrapper[4827]: I0126 09:08:59.038326 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:08:59 crc kubenswrapper[4827]: I0126 09:08:59.226043 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k927z" Jan 26 09:09:00 crc kubenswrapper[4827]: I0126 09:09:00.033216 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:00 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:00 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:00 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:00 crc kubenswrapper[4827]: I0126 09:09:00.033279 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:01 crc kubenswrapper[4827]: I0126 09:09:01.044177 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:01 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:01 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:01 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:01 crc kubenswrapper[4827]: I0126 09:09:01.044240 4827 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:01 crc kubenswrapper[4827]: I0126 09:09:01.287326 4827 patch_prober.go:28] interesting pod/console-f9d7485db-cnfxn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 09:09:01 crc kubenswrapper[4827]: I0126 09:09:01.287397 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cnfxn" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 09:09:02 crc kubenswrapper[4827]: I0126 09:09:02.034490 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:02 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:02 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:02 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:02 crc kubenswrapper[4827]: I0126 09:09:02.034558 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:03 crc kubenswrapper[4827]: I0126 09:09:03.034455 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:03 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:03 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:03 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:03 crc kubenswrapper[4827]: I0126 09:09:03.034540 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:04 crc kubenswrapper[4827]: I0126 09:09:04.033839 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:04 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:04 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:04 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:04 crc kubenswrapper[4827]: I0126 09:09:04.034083 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:04 crc kubenswrapper[4827]: I0126 09:09:04.242592 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:09:05 crc kubenswrapper[4827]: I0126 09:09:05.033011 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:05 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 
09:09:05 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:05 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:05 crc kubenswrapper[4827]: I0126 09:09:05.033102 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:06 crc kubenswrapper[4827]: I0126 09:09:06.033514 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:06 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:06 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:06 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:06 crc kubenswrapper[4827]: I0126 09:09:06.033571 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:07 crc kubenswrapper[4827]: I0126 09:09:07.032882 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:07 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:07 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:07 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:07 crc kubenswrapper[4827]: I0126 09:09:07.032948 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:08 crc kubenswrapper[4827]: I0126 09:09:08.033835 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:08 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:08 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:08 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:08 crc kubenswrapper[4827]: I0126 09:09:08.034118 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:09 crc kubenswrapper[4827]: I0126 09:09:09.039777 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:09 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:09 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:09 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:09 crc kubenswrapper[4827]: I0126 09:09:09.040052 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:10 crc kubenswrapper[4827]: I0126 09:09:10.033007 4827 patch_prober.go:28] interesting pod/router-default-5444994796-5724v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 09:09:10 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 26 09:09:10 crc kubenswrapper[4827]: [+]process-running ok Jan 26 09:09:10 crc kubenswrapper[4827]: healthz check failed Jan 26 09:09:10 crc kubenswrapper[4827]: I0126 09:09:10.033071 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5724v" podUID="c082a1b4-a8cb-4bd5-9034-1678368030c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 09:09:11 crc kubenswrapper[4827]: I0126 09:09:11.034328 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:09:11 crc kubenswrapper[4827]: I0126 09:09:11.037675 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5724v" Jan 26 09:09:11 crc kubenswrapper[4827]: I0126 09:09:11.288831 4827 patch_prober.go:28] interesting pod/console-f9d7485db-cnfxn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 09:09:11 crc kubenswrapper[4827]: I0126 09:09:11.288896 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cnfxn" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 09:09:12 crc kubenswrapper[4827]: I0126 09:09:12.022913 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sdrc6" Jan 26 09:09:12 crc kubenswrapper[4827]: I0126 09:09:12.268879 4827 patch_prober.go:28] interesting 
pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:09:12 crc kubenswrapper[4827]: I0126 09:09:12.268940 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:09:18 crc kubenswrapper[4827]: I0126 09:09:18.612320 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 09:09:19 crc kubenswrapper[4827]: E0126 09:09:19.059616 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 09:09:19 crc kubenswrapper[4827]: E0126 09:09:19.060436 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gwdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2tmph_openshift-marketplace(b1899b44-9f9b-4212-ab42-01ffbb3bc5d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:19 crc kubenswrapper[4827]: E0126 09:09:19.061718 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2tmph" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" Jan 26 09:09:21 crc 
kubenswrapper[4827]: I0126 09:09:21.123890 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 09:09:21 crc kubenswrapper[4827]: E0126 09:09:21.125216 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91dd592-8b1f-467f-9b98-965c90129b84" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.125235 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91dd592-8b1f-467f-9b98-965c90129b84" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: E0126 09:09:21.125245 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da4d158-c2b2-4c71-b4ad-173a6e339c7c" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.125251 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da4d158-c2b2-4c71-b4ad-173a6e339c7c" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.125355 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91dd592-8b1f-467f-9b98-965c90129b84" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.125364 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da4d158-c2b2-4c71-b4ad-173a6e339c7c" containerName="pruner" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.125733 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.127971 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.127971 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.132953 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.136591 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.136666 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.237965 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.238026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.238379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.258311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.290552 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.294086 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cnfxn" Jan 26 09:09:21 crc kubenswrapper[4827]: I0126 09:09:21.443713 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:21 crc kubenswrapper[4827]: E0126 09:09:21.950860 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2tmph" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" Jan 26 09:09:22 crc kubenswrapper[4827]: E0126 09:09:22.161234 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 09:09:22 crc kubenswrapper[4827]: E0126 09:09:22.161413 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hrvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vx7kr_openshift-marketplace(ca645bb3-362d-489e-bb7b-59f9441ca7ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:22 crc kubenswrapper[4827]: E0126 09:09:22.163036 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vx7kr" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" Jan 26 09:09:22 crc 
kubenswrapper[4827]: E0126 09:09:22.326371 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 09:09:22 crc kubenswrapper[4827]: E0126 09:09:22.326525 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nhct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-rvg5n_openshift-marketplace(3b937570-c0d6-42de-abf9-d4699d59139e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:22 crc kubenswrapper[4827]: E0126 09:09:22.328093 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rvg5n" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.773137 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rvg5n" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.773293 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vx7kr" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.878935 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.879318 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwlhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8fr9c_openshift-marketplace(32023ace-27de-4377-9cfb-27c706ef9205): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.879679 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.879806 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46mdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tl6c2_openshift-marketplace(f19386a1-51f4-4396-b49d-4ee6974c1126): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.881461 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8fr9c" podUID="32023ace-27de-4377-9cfb-27c706ef9205" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.882465 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tl6c2" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.907726 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.907884 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsjj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rv4tx_openshift-marketplace(3faf08fa-1553-4b39-b2f3-63f4b2985f4f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.909261 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rv4tx" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" Jan 26 09:09:23 crc 
kubenswrapper[4827]: E0126 09:09:23.917751 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.917876 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qxw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-s2hh2_openshift-marketplace(82835169-3dce-4182-8104-c3b09cc8e11c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 09:09:23 crc kubenswrapper[4827]: E0126 09:09:23.919617 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s2hh2" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.251340 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k927z"] Jan 26 09:09:24 crc kubenswrapper[4827]: W0126 09:09:24.254417 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bc714d_5eac_4b0e_8832_f65f57bffa1e.slice/crio-1a315886599987b7615658d7a8cb5459f617d76c448cade4b4ba27d11711cb14 WatchSource:0}: Error finding container 1a315886599987b7615658d7a8cb5459f617d76c448cade4b4ba27d11711cb14: Status 404 returned error can't find the container with id 1a315886599987b7615658d7a8cb5459f617d76c448cade4b4ba27d11711cb14 Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.308439 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 09:09:24 crc kubenswrapper[4827]: W0126 09:09:24.330254 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1809e415_966d_41dd_82db_fc3d84730df7.slice/crio-9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51 WatchSource:0}: Error finding container 9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51: Status 404 returned error can't find the container with id 
9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51 Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.667282 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1809e415-966d-41dd-82db-fc3d84730df7","Type":"ContainerStarted","Data":"9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51"} Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.668492 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k927z" event={"ID":"a9bc714d-5eac-4b0e-8832-f65f57bffa1e","Type":"ContainerStarted","Data":"1a315886599987b7615658d7a8cb5459f617d76c448cade4b4ba27d11711cb14"} Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.670533 4827 generic.go:334] "Generic (PLEG): container finished" podID="a713c948-1995-48a7-9c93-39b5e00934c0" containerID="f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a" exitCode=0 Jan 26 09:09:24 crc kubenswrapper[4827]: I0126 09:09:24.670571 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerDied","Data":"f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a"} Jan 26 09:09:24 crc kubenswrapper[4827]: E0126 09:09:24.672555 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8fr9c" podUID="32023ace-27de-4377-9cfb-27c706ef9205" Jan 26 09:09:24 crc kubenswrapper[4827]: E0126 09:09:24.672978 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-tl6c2" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" Jan 26 09:09:24 crc kubenswrapper[4827]: E0126 09:09:24.672689 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2hh2" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" Jan 26 09:09:24 crc kubenswrapper[4827]: E0126 09:09:24.676923 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rv4tx" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.686855 4827 generic.go:334] "Generic (PLEG): container finished" podID="1809e415-966d-41dd-82db-fc3d84730df7" containerID="056f7786a0220d77637e311def9e144cddd02374486a47c82a3b7a3bcc1195e5" exitCode=0 Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.686899 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1809e415-966d-41dd-82db-fc3d84730df7","Type":"ContainerDied","Data":"056f7786a0220d77637e311def9e144cddd02374486a47c82a3b7a3bcc1195e5"} Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.691190 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k927z" event={"ID":"a9bc714d-5eac-4b0e-8832-f65f57bffa1e","Type":"ContainerStarted","Data":"adbd8784f461cd9d1056f6e2b19d293f7fa88f3021ecd0a2d61e65c4608946fa"} Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.691245 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k927z" 
event={"ID":"a9bc714d-5eac-4b0e-8832-f65f57bffa1e","Type":"ContainerStarted","Data":"18650c308b7d602402db33a2be126d3f7a6d7991a9f66adebd52da056f537ac2"} Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.696479 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerStarted","Data":"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172"} Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.722804 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-k927z" podStartSLOduration=169.722784962 podStartE2EDuration="2m49.722784962s" podCreationTimestamp="2026-01-26 09:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:09:25.717619076 +0000 UTC m=+194.366290895" watchObservedRunningTime="2026-01-26 09:09:25.722784962 +0000 UTC m=+194.371456781" Jan 26 09:09:25 crc kubenswrapper[4827]: I0126 09:09:25.741009 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4r4r" podStartSLOduration=4.098904828 podStartE2EDuration="48.740993556s" podCreationTimestamp="2026-01-26 09:08:37 +0000 UTC" firstStartedPulling="2026-01-26 09:08:40.617673268 +0000 UTC m=+149.266345087" lastFinishedPulling="2026-01-26 09:09:25.259761996 +0000 UTC m=+193.908433815" observedRunningTime="2026-01-26 09:09:25.738064888 +0000 UTC m=+194.386736707" watchObservedRunningTime="2026-01-26 09:09:25.740993556 +0000 UTC m=+194.389665375" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.319754 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.320909 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.338005 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.506602 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.506715 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.506752 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.607571 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.607657 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.607723 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.607961 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.608012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.626503 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access\") pod \"installer-9-crc\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:26 crc kubenswrapper[4827]: I0126 09:09:26.643132 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.094490 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 09:09:27 crc kubenswrapper[4827]: W0126 09:09:27.117535 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod949b8545_8ccd_45c6_942d_fccf65af803b.slice/crio-a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515 WatchSource:0}: Error finding container a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515: Status 404 returned error can't find the container with id a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515 Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.118918 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.320040 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1809e415-966d-41dd-82db-fc3d84730df7" (UID: "1809e415-966d-41dd-82db-fc3d84730df7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.319888 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir\") pod \"1809e415-966d-41dd-82db-fc3d84730df7\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.322046 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access\") pod \"1809e415-966d-41dd-82db-fc3d84730df7\" (UID: \"1809e415-966d-41dd-82db-fc3d84730df7\") " Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.322295 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1809e415-966d-41dd-82db-fc3d84730df7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.330703 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1809e415-966d-41dd-82db-fc3d84730df7" (UID: "1809e415-966d-41dd-82db-fc3d84730df7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.423659 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1809e415-966d-41dd-82db-fc3d84730df7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.468873 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.468965 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.709574 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1809e415-966d-41dd-82db-fc3d84730df7","Type":"ContainerDied","Data":"9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51"} Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.709615 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9865c5313fc158238b1d5387d4939d84ce74b179ecb6ec4eed9de190c189fc51" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.709627 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 09:09:27 crc kubenswrapper[4827]: I0126 09:09:27.723179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"949b8545-8ccd-45c6-942d-fccf65af803b","Type":"ContainerStarted","Data":"a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515"} Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.027957 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"] Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.059242 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.153062 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.335230 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.752513 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"949b8545-8ccd-45c6-942d-fccf65af803b","Type":"ContainerStarted","Data":"f7b69bc129be9031d7670a100396366de5664f03e5ad168c6f63e33a7fe7b19a"} Jan 26 09:09:32 crc kubenswrapper[4827]: I0126 09:09:32.768431 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.768412531 podStartE2EDuration="6.768412531s" podCreationTimestamp="2026-01-26 09:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:09:32.765661708 +0000 UTC m=+201.414333537" watchObservedRunningTime="2026-01-26 09:09:32.768412531 +0000 
UTC m=+201.417084350" Jan 26 09:09:33 crc kubenswrapper[4827]: I0126 09:09:33.760505 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4r4r" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="registry-server" containerID="cri-o://b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172" gracePeriod=2 Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.653501 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.768990 4827 generic.go:334] "Generic (PLEG): container finished" podID="a713c948-1995-48a7-9c93-39b5e00934c0" containerID="b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172" exitCode=0 Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.769030 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerDied","Data":"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172"} Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.769086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4r4r" event={"ID":"a713c948-1995-48a7-9c93-39b5e00934c0","Type":"ContainerDied","Data":"5143c4c2490ef0e45c3122e2ff4ec294229af9eb74738f7a2664dfbefa0e3b3e"} Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.769106 4827 scope.go:117] "RemoveContainer" containerID="b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.769209 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4r4r" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.784515 4827 scope.go:117] "RemoveContainer" containerID="f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.802254 4827 scope.go:117] "RemoveContainer" containerID="f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.822190 4827 scope.go:117] "RemoveContainer" containerID="b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172" Jan 26 09:09:34 crc kubenswrapper[4827]: E0126 09:09:34.824216 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172\": container with ID starting with b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172 not found: ID does not exist" containerID="b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.824258 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172"} err="failed to get container status \"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172\": rpc error: code = NotFound desc = could not find container \"b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172\": container with ID starting with b804efa87424f97ef94d002ab670bd6fc36e9a5156d598a2a242c0170fdff172 not found: ID does not exist" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.824303 4827 scope.go:117] "RemoveContainer" containerID="f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a" Jan 26 09:09:34 crc kubenswrapper[4827]: E0126 09:09:34.824778 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a\": container with ID starting with f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a not found: ID does not exist" containerID="f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.824801 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a"} err="failed to get container status \"f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a\": rpc error: code = NotFound desc = could not find container \"f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a\": container with ID starting with f39260b02137e27cc65bb914cc60ea796c81566524d3a8a5f92d732ee8fae16a not found: ID does not exist" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.824815 4827 scope.go:117] "RemoveContainer" containerID="f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5" Jan 26 09:09:34 crc kubenswrapper[4827]: E0126 09:09:34.825053 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5\": container with ID starting with f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5 not found: ID does not exist" containerID="f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.825181 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5"} err="failed to get container status \"f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5\": rpc error: code = NotFound desc = could not find container 
\"f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5\": container with ID starting with f14c4ca6a7fdfdd3710218bbdc027c897bed08ef317258002b49db857cb65af5 not found: ID does not exist" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.851770 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities\") pod \"a713c948-1995-48a7-9c93-39b5e00934c0\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.851874 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content\") pod \"a713c948-1995-48a7-9c93-39b5e00934c0\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.851956 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndtwx\" (UniqueName: \"kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx\") pod \"a713c948-1995-48a7-9c93-39b5e00934c0\" (UID: \"a713c948-1995-48a7-9c93-39b5e00934c0\") " Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.854324 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities" (OuterVolumeSpecName: "utilities") pod "a713c948-1995-48a7-9c93-39b5e00934c0" (UID: "a713c948-1995-48a7-9c93-39b5e00934c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.857258 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx" (OuterVolumeSpecName: "kube-api-access-ndtwx") pod "a713c948-1995-48a7-9c93-39b5e00934c0" (UID: "a713c948-1995-48a7-9c93-39b5e00934c0"). InnerVolumeSpecName "kube-api-access-ndtwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.906977 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a713c948-1995-48a7-9c93-39b5e00934c0" (UID: "a713c948-1995-48a7-9c93-39b5e00934c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.953605 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.953660 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndtwx\" (UniqueName: \"kubernetes.io/projected/a713c948-1995-48a7-9c93-39b5e00934c0-kube-api-access-ndtwx\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:34 crc kubenswrapper[4827]: I0126 09:09:34.953671 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a713c948-1995-48a7-9c93-39b5e00934c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:35 crc kubenswrapper[4827]: I0126 09:09:35.108045 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:09:35 crc kubenswrapper[4827]: I0126 
09:09:35.111155 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4r4r"] Jan 26 09:09:35 crc kubenswrapper[4827]: I0126 09:09:35.711129 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" path="/var/lib/kubelet/pods/a713c948-1995-48a7-9c93-39b5e00934c0/volumes" Jan 26 09:09:36 crc kubenswrapper[4827]: I0126 09:09:36.780134 4827 generic.go:334] "Generic (PLEG): container finished" podID="32023ace-27de-4377-9cfb-27c706ef9205" containerID="4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986" exitCode=0 Jan 26 09:09:36 crc kubenswrapper[4827]: I0126 09:09:36.780427 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerDied","Data":"4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986"} Jan 26 09:09:36 crc kubenswrapper[4827]: I0126 09:09:36.783396 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerStarted","Data":"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d"} Jan 26 09:09:37 crc kubenswrapper[4827]: I0126 09:09:37.790411 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerStarted","Data":"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"} Jan 26 09:09:37 crc kubenswrapper[4827]: I0126 09:09:37.794241 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerDied","Data":"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d"} Jan 26 09:09:37 crc kubenswrapper[4827]: I0126 09:09:37.794541 4827 generic.go:334] 
"Generic (PLEG): container finished" podID="3b937570-c0d6-42de-abf9-d4699d59139e" containerID="96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d" exitCode=0 Jan 26 09:09:37 crc kubenswrapper[4827]: I0126 09:09:37.797301 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerStarted","Data":"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"} Jan 26 09:09:37 crc kubenswrapper[4827]: I0126 09:09:37.845440 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8fr9c" podStartSLOduration=4.49597796 podStartE2EDuration="59.845424605s" podCreationTimestamp="2026-01-26 09:08:38 +0000 UTC" firstStartedPulling="2026-01-26 09:08:41.950798203 +0000 UTC m=+150.599470022" lastFinishedPulling="2026-01-26 09:09:37.300244848 +0000 UTC m=+205.948916667" observedRunningTime="2026-01-26 09:09:37.826682166 +0000 UTC m=+206.475353985" watchObservedRunningTime="2026-01-26 09:09:37.845424605 +0000 UTC m=+206.494096424" Jan 26 09:09:38 crc kubenswrapper[4827]: I0126 09:09:38.802955 4827 generic.go:334] "Generic (PLEG): container finished" podID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerID="c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6" exitCode=0 Jan 26 09:09:38 crc kubenswrapper[4827]: I0126 09:09:38.803014 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerDied","Data":"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"} Jan 26 09:09:38 crc kubenswrapper[4827]: I0126 09:09:38.808277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" 
event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerStarted","Data":"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"} Jan 26 09:09:38 crc kubenswrapper[4827]: I0126 09:09:38.810818 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerStarted","Data":"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b"} Jan 26 09:09:38 crc kubenswrapper[4827]: I0126 09:09:38.842529 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvg5n" podStartSLOduration=3.925886599 podStartE2EDuration="58.842505005s" podCreationTimestamp="2026-01-26 09:08:40 +0000 UTC" firstStartedPulling="2026-01-26 09:08:43.295828375 +0000 UTC m=+151.944500194" lastFinishedPulling="2026-01-26 09:09:38.212446781 +0000 UTC m=+206.861118600" observedRunningTime="2026-01-26 09:09:38.836877525 +0000 UTC m=+207.485549354" watchObservedRunningTime="2026-01-26 09:09:38.842505005 +0000 UTC m=+207.491176814" Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.157911 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.159318 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.195423 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.818944 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerStarted","Data":"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c"} Jan 26 
09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.821245 4827 generic.go:334] "Generic (PLEG): container finished" podID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerID="bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a" exitCode=0 Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.821303 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerDied","Data":"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"} Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.824248 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerStarted","Data":"89bee7923d21e6f4dae4b0f2f2a75b1802ea78cad0b7a5b784346fcb278fcf2a"} Jan 26 09:09:39 crc kubenswrapper[4827]: I0126 09:09:39.846944 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rv4tx" podStartSLOduration=5.243508977 podStartE2EDuration="1m3.846898027s" podCreationTimestamp="2026-01-26 09:08:36 +0000 UTC" firstStartedPulling="2026-01-26 09:08:40.629919743 +0000 UTC m=+149.278591562" lastFinishedPulling="2026-01-26 09:09:39.233308793 +0000 UTC m=+207.881980612" observedRunningTime="2026-01-26 09:09:39.841525904 +0000 UTC m=+208.490197723" watchObservedRunningTime="2026-01-26 09:09:39.846898027 +0000 UTC m=+208.495569846" Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.685948 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.686034 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.830383 4827 generic.go:334] "Generic (PLEG): 
container finished" podID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerID="89bee7923d21e6f4dae4b0f2f2a75b1802ea78cad0b7a5b784346fcb278fcf2a" exitCode=0 Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.830538 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerDied","Data":"89bee7923d21e6f4dae4b0f2f2a75b1802ea78cad0b7a5b784346fcb278fcf2a"} Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.850250 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerStarted","Data":"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"} Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.858076 4827 generic.go:334] "Generic (PLEG): container finished" podID="82835169-3dce-4182-8104-c3b09cc8e11c" containerID="74b823078ae921ba0bdf833acb5fc63f714dfa192e603a3808f5adc81b86b354" exitCode=0 Jan 26 09:09:40 crc kubenswrapper[4827]: I0126 09:09:40.859009 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerDied","Data":"74b823078ae921ba0bdf833acb5fc63f714dfa192e603a3808f5adc81b86b354"} Jan 26 09:09:41 crc kubenswrapper[4827]: I0126 09:09:41.864910 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerStarted","Data":"b215a7f0628416056b2e1c5c3fb7f1a0c83060f673f97aab444ad03fa384849d"} Jan 26 09:09:41 crc kubenswrapper[4827]: I0126 09:09:41.868260 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" 
event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerStarted","Data":"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"} Jan 26 09:09:41 crc kubenswrapper[4827]: I0126 09:09:41.870346 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerStarted","Data":"666617bab3d2b52175add5027276b5047b0603fe7c3d61787ff23f22ad63b607"} Jan 26 09:09:41 crc kubenswrapper[4827]: I0126 09:09:41.872255 4827 generic.go:334] "Generic (PLEG): container finished" podID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerID="f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373" exitCode=0 Jan 26 09:09:41 crc kubenswrapper[4827]: I0126 09:09:41.872350 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerDied","Data":"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"} Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.082422 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvg5n" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="registry-server" probeResult="failure" output=< Jan 26 09:09:42 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 09:09:42 crc kubenswrapper[4827]: > Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.133174 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vx7kr" podStartSLOduration=5.527214785 podStartE2EDuration="1m6.133152268s" podCreationTimestamp="2026-01-26 09:08:36 +0000 UTC" firstStartedPulling="2026-01-26 09:08:40.751827418 +0000 UTC m=+149.400499237" lastFinishedPulling="2026-01-26 09:09:41.357764901 +0000 UTC m=+210.006436720" observedRunningTime="2026-01-26 09:09:42.131162799 +0000 UTC 
m=+210.779834628" watchObservedRunningTime="2026-01-26 09:09:42.133152268 +0000 UTC m=+210.781824087" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.134605 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s2hh2" podStartSLOduration=5.86103208 podStartE2EDuration="1m4.134596493s" podCreationTimestamp="2026-01-26 09:08:38 +0000 UTC" firstStartedPulling="2026-01-26 09:08:43.212539405 +0000 UTC m=+151.861211224" lastFinishedPulling="2026-01-26 09:09:41.486103818 +0000 UTC m=+210.134775637" observedRunningTime="2026-01-26 09:09:42.109171021 +0000 UTC m=+210.757842860" watchObservedRunningTime="2026-01-26 09:09:42.134596493 +0000 UTC m=+210.783268312" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.268463 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.268532 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.268583 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.269153 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094"} 
pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.269228 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094" gracePeriod=600 Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.289393 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tmph" podStartSLOduration=5.965053745 podStartE2EDuration="1m3.289375683s" podCreationTimestamp="2026-01-26 09:08:39 +0000 UTC" firstStartedPulling="2026-01-26 09:08:43.249584368 +0000 UTC m=+151.898256187" lastFinishedPulling="2026-01-26 09:09:40.573906306 +0000 UTC m=+209.222578125" observedRunningTime="2026-01-26 09:09:42.161941073 +0000 UTC m=+210.810612892" watchObservedRunningTime="2026-01-26 09:09:42.289375683 +0000 UTC m=+210.938047502" Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.880294 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094" exitCode=0 Jan 26 09:09:42 crc kubenswrapper[4827]: I0126 09:09:42.880344 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094"} Jan 26 09:09:43 crc kubenswrapper[4827]: I0126 09:09:43.914885 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" 
event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerStarted","Data":"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"} Jan 26 09:09:43 crc kubenswrapper[4827]: I0126 09:09:43.943940 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tl6c2" podStartSLOduration=4.660175698 podStartE2EDuration="1m7.94392338s" podCreationTimestamp="2026-01-26 09:08:36 +0000 UTC" firstStartedPulling="2026-01-26 09:08:39.477925684 +0000 UTC m=+148.126597503" lastFinishedPulling="2026-01-26 09:09:42.761673366 +0000 UTC m=+211.410345185" observedRunningTime="2026-01-26 09:09:43.943905429 +0000 UTC m=+212.592577258" watchObservedRunningTime="2026-01-26 09:09:43.94392338 +0000 UTC m=+212.592595189" Jan 26 09:09:44 crc kubenswrapper[4827]: I0126 09:09:44.921678 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1"} Jan 26 09:09:46 crc kubenswrapper[4827]: I0126 09:09:46.924134 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:09:46 crc kubenswrapper[4827]: I0126 09:09:46.924179 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:09:46 crc kubenswrapper[4827]: I0126 09:09:46.964565 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.391823 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.391902 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.435246 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.547291 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.547326 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.586478 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.986208 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:09:47 crc kubenswrapper[4827]: I0126 09:09:47.998178 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.232115 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.256395 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.256527 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.307028 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:49 
crc kubenswrapper[4827]: I0126 09:09:49.928385 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.946273 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vx7kr" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="registry-server" containerID="cri-o://666617bab3d2b52175add5027276b5047b0603fe7c3d61787ff23f22ad63b607" gracePeriod=2 Jan 26 09:09:49 crc kubenswrapper[4827]: I0126 09:09:49.985400 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.329492 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.329861 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.367940 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.724045 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.765527 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:50 crc kubenswrapper[4827]: I0126 09:09:50.992145 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.737262 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.959316 4827 generic.go:334] "Generic (PLEG): container finished" podID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerID="666617bab3d2b52175add5027276b5047b0603fe7c3d61787ff23f22ad63b607" exitCode=0 Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.960199 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerDied","Data":"666617bab3d2b52175add5027276b5047b0603fe7c3d61787ff23f22ad63b607"} Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.960263 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7kr" event={"ID":"ca645bb3-362d-489e-bb7b-59f9441ca7ec","Type":"ContainerDied","Data":"ff676ca8ec85d909bd64f1ffc3b7dca715537b29f9e19ea9c557c1cd8c0635c2"} Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.960290 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff676ca8ec85d909bd64f1ffc3b7dca715537b29f9e19ea9c557c1cd8c0635c2" Jan 26 09:09:51 crc kubenswrapper[4827]: I0126 09:09:51.961499 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.001127 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hrvn\" (UniqueName: \"kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn\") pod \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.001243 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities\") pod \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.002075 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities" (OuterVolumeSpecName: "utilities") pod "ca645bb3-362d-489e-bb7b-59f9441ca7ec" (UID: "ca645bb3-362d-489e-bb7b-59f9441ca7ec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.002306 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content\") pod \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\" (UID: \"ca645bb3-362d-489e-bb7b-59f9441ca7ec\") " Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.003376 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.012889 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn" (OuterVolumeSpecName: "kube-api-access-9hrvn") pod "ca645bb3-362d-489e-bb7b-59f9441ca7ec" (UID: "ca645bb3-362d-489e-bb7b-59f9441ca7ec"). InnerVolumeSpecName "kube-api-access-9hrvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.055367 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca645bb3-362d-489e-bb7b-59f9441ca7ec" (UID: "ca645bb3-362d-489e-bb7b-59f9441ca7ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.104869 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca645bb3-362d-489e-bb7b-59f9441ca7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.104904 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hrvn\" (UniqueName: \"kubernetes.io/projected/ca645bb3-362d-489e-bb7b-59f9441ca7ec-kube-api-access-9hrvn\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.963560 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vx7kr" Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.964056 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s2hh2" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="registry-server" containerID="cri-o://b215a7f0628416056b2e1c5c3fb7f1a0c83060f673f97aab444ad03fa384849d" gracePeriod=2 Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.993858 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:09:52 crc kubenswrapper[4827]: I0126 09:09:52.995026 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vx7kr"] Jan 26 09:09:53 crc kubenswrapper[4827]: I0126 09:09:53.714721 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" path="/var/lib/kubelet/pods/ca645bb3-362d-489e-bb7b-59f9441ca7ec/volumes" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.329845 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 
09:09:54.330615 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvg5n" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="registry-server" containerID="cri-o://aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b" gracePeriod=2 Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.724542 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.842445 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content\") pod \"3b937570-c0d6-42de-abf9-d4699d59139e\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.842533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities\") pod \"3b937570-c0d6-42de-abf9-d4699d59139e\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.842594 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nhct\" (UniqueName: \"kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct\") pod \"3b937570-c0d6-42de-abf9-d4699d59139e\" (UID: \"3b937570-c0d6-42de-abf9-d4699d59139e\") " Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.843764 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities" (OuterVolumeSpecName: "utilities") pod "3b937570-c0d6-42de-abf9-d4699d59139e" (UID: "3b937570-c0d6-42de-abf9-d4699d59139e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.845414 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.849529 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct" (OuterVolumeSpecName: "kube-api-access-2nhct") pod "3b937570-c0d6-42de-abf9-d4699d59139e" (UID: "3b937570-c0d6-42de-abf9-d4699d59139e"). InnerVolumeSpecName "kube-api-access-2nhct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.947210 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nhct\" (UniqueName: \"kubernetes.io/projected/3b937570-c0d6-42de-abf9-d4699d59139e-kube-api-access-2nhct\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.966124 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b937570-c0d6-42de-abf9-d4699d59139e" (UID: "3b937570-c0d6-42de-abf9-d4699d59139e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.985918 4827 generic.go:334] "Generic (PLEG): container finished" podID="82835169-3dce-4182-8104-c3b09cc8e11c" containerID="b215a7f0628416056b2e1c5c3fb7f1a0c83060f673f97aab444ad03fa384849d" exitCode=0 Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.985971 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerDied","Data":"b215a7f0628416056b2e1c5c3fb7f1a0c83060f673f97aab444ad03fa384849d"} Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.991524 4827 generic.go:334] "Generic (PLEG): container finished" podID="3b937570-c0d6-42de-abf9-d4699d59139e" containerID="aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b" exitCode=0 Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.991551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerDied","Data":"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b"} Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.991572 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvg5n" event={"ID":"3b937570-c0d6-42de-abf9-d4699d59139e","Type":"ContainerDied","Data":"986e1150f774b84d4156ecf3ba4ebf923ab540326fbc1bb87a012afdb6b1e000"} Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.991589 4827 scope.go:117] "RemoveContainer" containerID="aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b" Jan 26 09:09:54 crc kubenswrapper[4827]: I0126 09:09:54.991744 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvg5n" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.029514 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.029565 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvg5n"] Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.029931 4827 scope.go:117] "RemoveContainer" containerID="96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.053286 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b937570-c0d6-42de-abf9-d4699d59139e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.059843 4827 scope.go:117] "RemoveContainer" containerID="12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.079154 4827 scope.go:117] "RemoveContainer" containerID="aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b" Jan 26 09:09:55 crc kubenswrapper[4827]: E0126 09:09:55.079582 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b\": container with ID starting with aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b not found: ID does not exist" containerID="aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.079620 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b"} err="failed to get container status 
\"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b\": rpc error: code = NotFound desc = could not find container \"aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b\": container with ID starting with aa9096a5c7a1e101aabbfb503115cd44f1e48563d56e6c76e506ad28d376dc8b not found: ID does not exist" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.079676 4827 scope.go:117] "RemoveContainer" containerID="96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d" Jan 26 09:09:55 crc kubenswrapper[4827]: E0126 09:09:55.080020 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d\": container with ID starting with 96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d not found: ID does not exist" containerID="96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.080041 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d"} err="failed to get container status \"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d\": rpc error: code = NotFound desc = could not find container \"96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d\": container with ID starting with 96c427330386cf2a3432fa4562560313a70c2f10dfa42ba5103fe61193aa597d not found: ID does not exist" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.080054 4827 scope.go:117] "RemoveContainer" containerID="12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506" Jan 26 09:09:55 crc kubenswrapper[4827]: E0126 09:09:55.080361 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506\": container with ID starting with 12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506 not found: ID does not exist" containerID="12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.080377 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506"} err="failed to get container status \"12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506\": rpc error: code = NotFound desc = could not find container \"12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506\": container with ID starting with 12de13161a756020f94957f6bcea648184220332e4e9a3a36e014c8f60394506 not found: ID does not exist" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.148245 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.255527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities\") pod \"82835169-3dce-4182-8104-c3b09cc8e11c\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.255837 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qxw4\" (UniqueName: \"kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4\") pod \"82835169-3dce-4182-8104-c3b09cc8e11c\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.256028 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content\") pod \"82835169-3dce-4182-8104-c3b09cc8e11c\" (UID: \"82835169-3dce-4182-8104-c3b09cc8e11c\") " Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.256370 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities" (OuterVolumeSpecName: "utilities") pod "82835169-3dce-4182-8104-c3b09cc8e11c" (UID: "82835169-3dce-4182-8104-c3b09cc8e11c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.256554 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.258899 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4" (OuterVolumeSpecName: "kube-api-access-4qxw4") pod "82835169-3dce-4182-8104-c3b09cc8e11c" (UID: "82835169-3dce-4182-8104-c3b09cc8e11c"). InnerVolumeSpecName "kube-api-access-4qxw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.288412 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82835169-3dce-4182-8104-c3b09cc8e11c" (UID: "82835169-3dce-4182-8104-c3b09cc8e11c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.357832 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82835169-3dce-4182-8104-c3b09cc8e11c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.357873 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qxw4\" (UniqueName: \"kubernetes.io/projected/82835169-3dce-4182-8104-c3b09cc8e11c-kube-api-access-4qxw4\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:55 crc kubenswrapper[4827]: I0126 09:09:55.712883 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" path="/var/lib/kubelet/pods/3b937570-c0d6-42de-abf9-d4699d59139e/volumes" Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.000500 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2hh2" event={"ID":"82835169-3dce-4182-8104-c3b09cc8e11c","Type":"ContainerDied","Data":"19906e92ea4bbd5b0b4844b4c4a4f9c37c4428c66658c20dd85ad9469ac507ff"} Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.000515 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2hh2" Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.000557 4827 scope.go:117] "RemoveContainer" containerID="b215a7f0628416056b2e1c5c3fb7f1a0c83060f673f97aab444ad03fa384849d" Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.022469 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.024925 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2hh2"] Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.025569 4827 scope.go:117] "RemoveContainer" containerID="74b823078ae921ba0bdf833acb5fc63f714dfa192e603a3808f5adc81b86b354" Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.038683 4827 scope.go:117] "RemoveContainer" containerID="7936ba4c155b33211c8b674be8c6ddcd0ca864b46fb3c130b5c3f2cf9dc095b1" Jan 26 09:09:56 crc kubenswrapper[4827]: I0126 09:09:56.967665 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.054785 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" containerID="cri-o://8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf" gracePeriod=15 Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.443493 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484318 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484362 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484392 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484417 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484432 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " 
Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484501 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484526 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484545 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 
09:09:57.484595 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484609 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484680 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.484703 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error\") pod \"3d1327f0-1810-452b-a195-b40a94c96326\" (UID: \"3d1327f0-1810-452b-a195-b40a94c96326\") " Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.485715 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.485966 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.486783 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.487092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.487563 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.490081 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.491210 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5" (OuterVolumeSpecName: "kube-api-access-h4vc5") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "kube-api-access-h4vc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.491268 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.491890 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.492257 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.498071 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.498281 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.503634 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.504148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3d1327f0-1810-452b-a195-b40a94c96326" (UID: "3d1327f0-1810-452b-a195-b40a94c96326"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586208 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/3d1327f0-1810-452b-a195-b40a94c96326-kube-api-access-h4vc5\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586239 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586251 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586261 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586270 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586279 4827 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d1327f0-1810-452b-a195-b40a94c96326-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586287 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586295 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586303 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586313 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586322 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586329 4827 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586338 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.586346 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3d1327f0-1810-452b-a195-b40a94c96326-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 09:09:57 crc kubenswrapper[4827]: I0126 09:09:57.715379 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" path="/var/lib/kubelet/pods/82835169-3dce-4182-8104-c3b09cc8e11c/volumes" Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.026227 4827 generic.go:334] "Generic (PLEG): container finished" podID="3d1327f0-1810-452b-a195-b40a94c96326" containerID="8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf" exitCode=0 Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.026314 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" event={"ID":"3d1327f0-1810-452b-a195-b40a94c96326","Type":"ContainerDied","Data":"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf"} Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.026362 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" event={"ID":"3d1327f0-1810-452b-a195-b40a94c96326","Type":"ContainerDied","Data":"4f5dc32ea2c857e66ebcba3e2506093defe2d09b1073d3eb2fc87d66f6070618"} Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 
09:09:58.026392 4827 scope.go:117] "RemoveContainer" containerID="8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf" Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.026323 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkgr6" Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.054561 4827 scope.go:117] "RemoveContainer" containerID="8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf" Jan 26 09:09:58 crc kubenswrapper[4827]: E0126 09:09:58.055987 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf\": container with ID starting with 8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf not found: ID does not exist" containerID="8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf" Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.056031 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf"} err="failed to get container status \"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf\": rpc error: code = NotFound desc = could not find container \"8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf\": container with ID starting with 8021b5495edc5d8f5aea47e18d10e5e18c0c9dd500a7dc393a9565546b00e2cf not found: ID does not exist" Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.057329 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"] Jan 26 09:09:58 crc kubenswrapper[4827]: I0126 09:09:58.063176 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkgr6"] Jan 26 09:09:59 crc kubenswrapper[4827]: I0126 
09:09:59.712856 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1327f0-1810-452b-a195-b40a94c96326" path="/var/lib/kubelet/pods/3d1327f0-1810-452b-a195-b40a94c96326/volumes" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.715138 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b4b9565b9-npg65"] Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.716938 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717066 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.717158 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717238 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.717327 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717422 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.717505 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717587 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="extract-utilities" Jan 26 09:10:04 crc 
kubenswrapper[4827]: E0126 09:10:04.717723 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717820 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.717909 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.717986 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.718068 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.718145 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.718230 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.718310 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.718390 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.718493 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="extract-utilities" Jan 26 09:10:04 crc 
kubenswrapper[4827]: E0126 09:10:04.718578 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1809e415-966d-41dd-82db-fc3d84730df7" containerName="pruner" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.718680 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1809e415-966d-41dd-82db-fc3d84730df7" containerName="pruner" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.718983 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719068 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="extract-utilities" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.719151 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719224 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.719308 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719382 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: E0126 09:10:04.719462 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719551 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="extract-content" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 
09:10:04.719776 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="82835169-3dce-4182-8104-c3b09cc8e11c" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719872 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b937570-c0d6-42de-abf9-d4699d59139e" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.719948 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a713c948-1995-48a7-9c93-39b5e00934c0" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.720073 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1327f0-1810-452b-a195-b40a94c96326" containerName="oauth-openshift" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.720158 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca645bb3-362d-489e-bb7b-59f9441ca7ec" containerName="registry-server" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.720242 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1809e415-966d-41dd-82db-fc3d84730df7" containerName="pruner" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.720814 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.724455 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b4b9565b9-npg65"] Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.724758 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.724978 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.725075 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.725395 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.725539 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.733578 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.734152 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.734325 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.734599 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 
09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.734953 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.735294 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.739300 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.746365 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.755293 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.776669 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.793693 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.793936 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-dir\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " 
pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.794092 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.794307 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-session\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.794495 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.794681 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 
09:10:04.794886 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.795044 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-policies\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.795192 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.795394 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.795603 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-error\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.796118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.796173 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-login\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.796591 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684vh\" (UniqueName: \"kubernetes.io/projected/355ad6f0-ecb6-4677-a460-912d93265dc2-kube-api-access-684vh\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898188 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-session\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " 
pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898242 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898275 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898332 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898358 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-policies\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898439 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-error\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898517 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898568 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-login\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898630 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684vh\" (UniqueName: \"kubernetes.io/projected/355ad6f0-ecb6-4677-a460-912d93265dc2-kube-api-access-684vh\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898708 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-dir\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.898756 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.900024 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.900186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.900333 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.900617 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-dir\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.900960 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/355ad6f0-ecb6-4677-a460-912d93265dc2-audit-policies\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.903746 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.904185 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.904365 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-login\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.904592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.905379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.906501 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-session\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.906510 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-user-template-error\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.915342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/355ad6f0-ecb6-4677-a460-912d93265dc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:04 crc kubenswrapper[4827]: I0126 09:10:04.916013 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684vh\" (UniqueName: \"kubernetes.io/projected/355ad6f0-ecb6-4677-a460-912d93265dc2-kube-api-access-684vh\") pod \"oauth-openshift-7b4b9565b9-npg65\" (UID: \"355ad6f0-ecb6-4677-a460-912d93265dc2\") " pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:05 crc kubenswrapper[4827]: I0126 09:10:05.063272 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:05 crc kubenswrapper[4827]: I0126 09:10:05.460887 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b4b9565b9-npg65"]
Jan 26 09:10:05 crc kubenswrapper[4827]: W0126 09:10:05.466865 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355ad6f0_ecb6_4677_a460_912d93265dc2.slice/crio-4c793a7c8ed7126cc577c9016ba7a1189e3199ec8bef7bd5e218cd2186cd5363 WatchSource:0}: Error finding container 4c793a7c8ed7126cc577c9016ba7a1189e3199ec8bef7bd5e218cd2186cd5363: Status 404 returned error can't find the container with id 4c793a7c8ed7126cc577c9016ba7a1189e3199ec8bef7bd5e218cd2186cd5363
Jan 26 09:10:06 crc kubenswrapper[4827]: I0126 09:10:06.082555 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" event={"ID":"355ad6f0-ecb6-4677-a460-912d93265dc2","Type":"ContainerStarted","Data":"e31b60d4c5431d2fa3ac282892c2b12d97772d75ad04fe238b0ff96975f66b5a"}
Jan 26 09:10:06 crc kubenswrapper[4827]: I0126 09:10:06.082884 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:06 crc kubenswrapper[4827]: I0126 09:10:06.082896 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" event={"ID":"355ad6f0-ecb6-4677-a460-912d93265dc2","Type":"ContainerStarted","Data":"4c793a7c8ed7126cc577c9016ba7a1189e3199ec8bef7bd5e218cd2186cd5363"}
Jan 26 09:10:06 crc kubenswrapper[4827]: I0126 09:10:06.099198 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65"
Jan 26 09:10:06 crc kubenswrapper[4827]: I0126 09:10:06.110512 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b4b9565b9-npg65" podStartSLOduration=34.110487109 podStartE2EDuration="34.110487109s" podCreationTimestamp="2026-01-26 09:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:10:06.101774634 +0000 UTC m=+234.750446463" watchObservedRunningTime="2026-01-26 09:10:06.110487109 +0000 UTC m=+234.759158928"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.606183 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607252 4827 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607552 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513" gracePeriod=15
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607664 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607703 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71" gracePeriod=15
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607728 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114" gracePeriod=15
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607765 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa" gracePeriod=15
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.607964 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0" gracePeriod=15
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.611520 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.611894 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.611916 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.611949 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.611961 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.611978 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.611990 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.612008 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612020 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.612041 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612054 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.612069 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612081 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.612135 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612146 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612315 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612333 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612355 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612373 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612389 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.612427 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.665309 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.667612 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.667727 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.667766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.668022 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.668096 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.668162 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.668227 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.668263 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.769883 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.769930 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.769947 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.769977 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770004 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770019 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770024 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770169 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770196 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770269 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770743 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770772 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770778 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770795 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770784 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.770813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: I0126 09:10:09.968571 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 09:10:09 crc kubenswrapper[4827]: W0126 09:10:09.991529 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8e6616f7f442e60b300ab0eee6c8efb9e880b699e76fbcacb0bda32cec8ed440 WatchSource:0}: Error finding container 8e6616f7f442e60b300ab0eee6c8efb9e880b699e76fbcacb0bda32cec8ed440: Status 404 returned error can't find the container with id 8e6616f7f442e60b300ab0eee6c8efb9e880b699e76fbcacb0bda32cec8ed440
Jan 26 09:10:09 crc kubenswrapper[4827]: E0126 09:10:09.995873 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e3cd9dbaa4415 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 09:10:09.994122261 +0000 UTC m=+238.642794080,LastTimestamp:2026-01-26 09:10:09.994122261 +0000 UTC m=+238.642794080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.107382 4827 generic.go:334] "Generic (PLEG): container finished" podID="949b8545-8ccd-45c6-942d-fccf65af803b" containerID="f7b69bc129be9031d7670a100396366de5664f03e5ad168c6f63e33a7fe7b19a" exitCode=0
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.107441 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"949b8545-8ccd-45c6-942d-fccf65af803b","Type":"ContainerDied","Data":"f7b69bc129be9031d7670a100396366de5664f03e5ad168c6f63e33a7fe7b19a"}
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.108090 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.108339 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8e6616f7f442e60b300ab0eee6c8efb9e880b699e76fbcacb0bda32cec8ed440"}
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.108350 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.110944 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.111801 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.112630 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0" exitCode=0
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.112675 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114" exitCode=0
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.112682 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71" exitCode=0
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.112690 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa" exitCode=2
Jan 26 09:10:10 crc kubenswrapper[4827]: I0126 09:10:10.112716 4827 scope.go:117] "RemoveContainer" containerID="eb9e843c249b106a2f5681129b400299923709d3ee4b8d655b143ab58d8c4d6d"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.121729 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008"}
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.122328 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.123169 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.124624 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.388179 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.388783 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.389219 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493201 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access\") pod \"949b8545-8ccd-45c6-942d-fccf65af803b\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") "
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493264 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir\") pod \"949b8545-8ccd-45c6-942d-fccf65af803b\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") "
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493297 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock\") pod \"949b8545-8ccd-45c6-942d-fccf65af803b\" (UID: \"949b8545-8ccd-45c6-942d-fccf65af803b\") "
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493405 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "949b8545-8ccd-45c6-942d-fccf65af803b" (UID: "949b8545-8ccd-45c6-942d-fccf65af803b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493474 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock" (OuterVolumeSpecName: "var-lock") pod "949b8545-8ccd-45c6-942d-fccf65af803b" (UID: "949b8545-8ccd-45c6-942d-fccf65af803b"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493848 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.493892 4827 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/949b8545-8ccd-45c6-942d-fccf65af803b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.497933 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "949b8545-8ccd-45c6-942d-fccf65af803b" (UID: "949b8545-8ccd-45c6-942d-fccf65af803b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.594582 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949b8545-8ccd-45c6-942d-fccf65af803b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.740496 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:11 crc kubenswrapper[4827]: I0126 09:10:11.741132 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.031200 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.038901 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.039596 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.040136 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.040495 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103566 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103608 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103690 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103749 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103893 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.103933 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.104033 4827 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.104056 4827 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.104078 4827 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.136112 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"949b8545-8ccd-45c6-942d-fccf65af803b","Type":"ContainerDied","Data":"a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515"} Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.136193 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b6ba17d51c534917388f1aa1572eee9a9b130964813c8aa1e8a38ede3b6515" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.136136 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.140124 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.142422 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513" exitCode=0 Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.142488 4827 scope.go:117] "RemoveContainer" containerID="09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.142551 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.143225 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.143710 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.144023 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.160156 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.160674 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.161154 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.161957 4827 scope.go:117] "RemoveContainer" containerID="04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.174739 4827 scope.go:117] "RemoveContainer" containerID="93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.189697 4827 scope.go:117] "RemoveContainer" containerID="3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.208418 4827 scope.go:117] "RemoveContainer" 
containerID="77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.221421 4827 scope.go:117] "RemoveContainer" containerID="632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.234514 4827 scope.go:117] "RemoveContainer" containerID="09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.234961 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\": container with ID starting with 09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0 not found: ID does not exist" containerID="09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.234997 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0"} err="failed to get container status \"09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\": rpc error: code = NotFound desc = could not find container \"09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0\": container with ID starting with 09c3268395972cd029f0fb17d9448e4535c7d972a314dece3d6f79d648101cc0 not found: ID does not exist" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.235023 4827 scope.go:117] "RemoveContainer" containerID="04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.235409 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\": container with ID starting with 
04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114 not found: ID does not exist" containerID="04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.235444 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114"} err="failed to get container status \"04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\": rpc error: code = NotFound desc = could not find container \"04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114\": container with ID starting with 04a3f921eafb5bbb0c862a67189474a271c57761af3e163372b0a336487ec114 not found: ID does not exist" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.235462 4827 scope.go:117] "RemoveContainer" containerID="93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.235699 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\": container with ID starting with 93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71 not found: ID does not exist" containerID="93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.235725 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71"} err="failed to get container status \"93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\": rpc error: code = NotFound desc = could not find container \"93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71\": container with ID starting with 93b4a5e8159af991f27bfe1366e8e093a5a9bd41041775b799166e389c3cab71 not found: ID does not 
exist" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.235744 4827 scope.go:117] "RemoveContainer" containerID="3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.235994 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\": container with ID starting with 3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa not found: ID does not exist" containerID="3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.236017 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa"} err="failed to get container status \"3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\": rpc error: code = NotFound desc = could not find container \"3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa\": container with ID starting with 3747d3381883bde6d0bd3da2e17a2acb135c71e69e93009e9612d189112eb9fa not found: ID does not exist" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.236033 4827 scope.go:117] "RemoveContainer" containerID="77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.236250 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\": container with ID starting with 77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513 not found: ID does not exist" containerID="77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.236269 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513"} err="failed to get container status \"77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\": rpc error: code = NotFound desc = could not find container \"77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513\": container with ID starting with 77f0aeb4ff8b522cd78c0d5c47259808df46e1000e700e9b03beb3866d857513 not found: ID does not exist" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.236283 4827 scope.go:117] "RemoveContainer" containerID="632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474" Jan 26 09:10:12 crc kubenswrapper[4827]: E0126 09:10:12.236485 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\": container with ID starting with 632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474 not found: ID does not exist" containerID="632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474" Jan 26 09:10:12 crc kubenswrapper[4827]: I0126 09:10:12.236508 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474"} err="failed to get container status \"632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\": rpc error: code = NotFound desc = could not find container \"632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474\": container with ID starting with 632053b9b462c710a88af57f0cfafc6825c9ce18451a2591e69712fe509fb474 not found: ID does not exist" Jan 26 09:10:13 crc kubenswrapper[4827]: I0126 09:10:13.713920 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 09:10:14 crc 
kubenswrapper[4827]: E0126 09:10:14.463001 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e3cd9dbaa4415 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 09:10:09.994122261 +0000 UTC m=+238.642794080,LastTimestamp:2026-01-26 09:10:09.994122261 +0000 UTC m=+238.642794080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.647244 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.647886 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.648225 4827 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.648454 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.648883 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:14 crc kubenswrapper[4827]: I0126 09:10:14.648908 4827 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.649105 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms" Jan 26 09:10:14 crc kubenswrapper[4827]: E0126 09:10:14.849675 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Jan 26 09:10:15 crc kubenswrapper[4827]: E0126 09:10:15.250082 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Jan 26 09:10:16 crc kubenswrapper[4827]: 
E0126 09:10:16.050896 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="1.6s" Jan 26 09:10:17 crc kubenswrapper[4827]: E0126 09:10:17.651704 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="3.2s" Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.702887 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.704454 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.704941 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.718737 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b" Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.718767 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:20 crc kubenswrapper[4827]: E0126 09:10:20.719126 4827 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:20 crc kubenswrapper[4827]: I0126 09:10:20.719683 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:20 crc kubenswrapper[4827]: E0126 09:10:20.852908 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="6.4s"
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.199988 4827 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bb6eb2f1a2b4adb6368c2da740da847b3c3f75e103032335edd27b98d19b45ed" exitCode=0
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.200093 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bb6eb2f1a2b4adb6368c2da740da847b3c3f75e103032335edd27b98d19b45ed"}
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.200649 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"559a61c2da40160e73c00d009980d929febf191894ff0f64032dc7890dcdae8a"}
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.201030 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.201053 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:21 crc kubenswrapper[4827]: E0126 09:10:21.201495 4827 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.201507 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:21 crc kubenswrapper[4827]: I0126 09:10:21.202098 4827 status_manager.go:851] "Failed to get status for pod" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused"
Jan 26 09:10:22 crc kubenswrapper[4827]: I0126 09:10:22.207943 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e7928a97c3915a589fc284f2fd923cefe5d77601f07257547f25c1738e8951c1"}
Jan 26 09:10:22 crc kubenswrapper[4827]: I0126 09:10:22.207984 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"153027a62737e6bfe6a8939b855696ac7a8059c7663f6bef1bcf3b60532d16e4"}
Jan 26 09:10:22 crc kubenswrapper[4827]: I0126 09:10:22.207995 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d838c9c2d97f5c11d7ce982cbaa4c0ef0b23ad05244602fa245f4d601f1a0f93"}
Jan 26 09:10:22 crc kubenswrapper[4827]: I0126 09:10:22.208004 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0df307dd4485f83eecdaada7f8bc861ac9adf703e12a1740bb661a371c997560"}
Jan 26 09:10:23 crc kubenswrapper[4827]: I0126 09:10:23.216066 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c218d37e36f3f085fc1ecb59f9577f7d022c9323c13c0d36d6dcd03e37767b7b"}
Jan 26 09:10:23 crc kubenswrapper[4827]: I0126 09:10:23.216665 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:23 crc kubenswrapper[4827]: I0126 09:10:23.216682 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:23 crc kubenswrapper[4827]: I0126 09:10:23.217025 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:24 crc kubenswrapper[4827]: I0126 09:10:24.238404 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 09:10:24 crc kubenswrapper[4827]: I0126 09:10:24.238450 4827 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b" exitCode=1
Jan 26 09:10:24 crc kubenswrapper[4827]: I0126 09:10:24.238478 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b"}
Jan 26 09:10:24 crc kubenswrapper[4827]: I0126 09:10:24.238915 4827 scope.go:117] "RemoveContainer" containerID="7feabdcca241a94fdbe79c40fcf8b1eb3355c832642a09156f6dfbde27bff00b"
Jan 26 09:10:25 crc kubenswrapper[4827]: I0126 09:10:25.247207 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 09:10:25 crc kubenswrapper[4827]: I0126 09:10:25.247518 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9b1c07aba5e14ac940b0646a5945a7ea4bc15a1c370fdb3fc287e8f96daf7761"}
Jan 26 09:10:25 crc kubenswrapper[4827]: I0126 09:10:25.720053 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:25 crc kubenswrapper[4827]: I0126 09:10:25.720138 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:25 crc kubenswrapper[4827]: I0126 09:10:25.726092 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:28 crc kubenswrapper[4827]: I0126 09:10:28.270688 4827 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:28 crc kubenswrapper[4827]: I0126 09:10:28.272904 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad0e1abc-3aa3-4e25-a84b-5fbdba91852b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:10:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:10:21Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T09:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6eb2f1a2b4adb6368c2da740da847b3c3f75e103032335edd27b98d19b45ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6eb2f1a2b4adb6368c2da740da847b3c3f75e103032335edd27b98d19b45ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T09:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T09:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found"
Jan 26 09:10:28 crc kubenswrapper[4827]: I0126 09:10:28.322459 4827 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ab4c0f6f-c90c-457c-b64e-a16fa3aa93ea"
Jan 26 09:10:29 crc kubenswrapper[4827]: I0126 09:10:29.267786 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:29 crc kubenswrapper[4827]: I0126 09:10:29.268129 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad0e1abc-3aa3-4e25-a84b-5fbdba91852b"
Jan 26 09:10:29 crc kubenswrapper[4827]: I0126 09:10:29.270841 4827 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ab4c0f6f-c90c-457c-b64e-a16fa3aa93ea"
Jan 26 09:10:29 crc kubenswrapper[4827]: I0126 09:10:29.537729 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:10:32 crc kubenswrapper[4827]: I0126 09:10:32.594678 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:10:32 crc kubenswrapper[4827]: I0126 09:10:32.600118 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:10:34 crc kubenswrapper[4827]: I0126 09:10:34.697134 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 09:10:35 crc kubenswrapper[4827]: I0126 09:10:35.260969 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 09:10:35 crc kubenswrapper[4827]: I0126 09:10:35.290911 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 09:10:35 crc kubenswrapper[4827]: I0126 09:10:35.583876 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 09:10:35 crc kubenswrapper[4827]: I0126 09:10:35.898352 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.199099 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.217788 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.404752 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.620835 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.824785 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 09:10:36 crc kubenswrapper[4827]: I0126 09:10:36.879751 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.219447 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.504588 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.528047 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.548102 4827 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.579990 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.607990 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 09:10:37 crc kubenswrapper[4827]: I0126 09:10:37.663692 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 09:10:38 crc kubenswrapper[4827]: I0126 09:10:38.520412 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 26 09:10:38 crc kubenswrapper[4827]: I0126 09:10:38.680099 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 09:10:38 crc kubenswrapper[4827]: I0126 09:10:38.899599 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 09:10:38 crc kubenswrapper[4827]: I0126 09:10:38.907404 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.026330 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.208163 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.307411 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.401739 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.540629 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.589799 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.851741 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 09:10:39 crc kubenswrapper[4827]: I0126 09:10:39.932274 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.090673 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.284776 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.395382 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.435486 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.633792 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.675704 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.759777 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.787255 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 09:10:40 crc kubenswrapper[4827]: I0126 09:10:40.796957 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.014534 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.023464 4827 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.026218 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=32.026202745 podStartE2EDuration="32.026202745s" podCreationTimestamp="2026-01-26 09:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:10:28.319331417 +0000 UTC m=+256.968003236" watchObservedRunningTime="2026-01-26 09:10:41.026202745 +0000 UTC m=+269.674874574"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.028681 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.028724 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.034966 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.040224 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.056123 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.056103439 podStartE2EDuration="13.056103439s" podCreationTimestamp="2026-01-26 09:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:10:41.053389423 +0000 UTC m=+269.702061242" watchObservedRunningTime="2026-01-26 09:10:41.056103439 +0000 UTC m=+269.704775268"
Jan 26 09:10:41 crc kubenswrapper[4827]: I0126 09:10:41.353838 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.010827 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.013242 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.013468 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.452192 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.520007 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.722346 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.756150 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.880561 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 09:10:42 crc kubenswrapper[4827]: I0126 09:10:42.988597 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.103469 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.211009 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.291477 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.470164 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.539671 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.780835 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.832776 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.869301 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 09:10:43 crc kubenswrapper[4827]: I0126 09:10:43.917781 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.051496 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.151005 4827 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.214615 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.268447 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.272353 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.313120 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.390162 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.463503 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.477285 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.554899 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.575048 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.663062 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.783877 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.803438 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.829462 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.855594 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.910600 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.930215 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 09:10:44 crc kubenswrapper[4827]: I0126 09:10:44.977492 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.065505 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.104553 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.134182 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.250312 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.284908 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.345111 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.385596 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.410690 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.422343 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.472710 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.542011 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.554037 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.671522 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.849052 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.946742 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.959445 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 09:10:45 crc kubenswrapper[4827]: I0126 09:10:45.961340 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.043334 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.054107 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.087104 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.125665 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.156294 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.290522 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.314319 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.330734 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.394591 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.437679 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.449764 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.486465 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.503631 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.596058 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.654389 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.660081 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.746401 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 09:10:46 crc kubenswrapper[4827]: I0126 09:10:46.923851 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.006226 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.053035 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.152125 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.208902 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 09:10:47 crc kubenswrapper[4827]:
I0126 09:10:47.316353 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.322416 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.374034 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.374462 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.442629 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.526891 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.597951 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.617582 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.756798 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.759183 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.855739 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.941952 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.957895 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 09:10:47 crc kubenswrapper[4827]: I0126 09:10:47.970346 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.011868 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.105629 4827 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.110716 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.151277 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.205054 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.329697 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.449442 4827 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.484030 4827 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.487759 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.637052 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.668457 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.727523 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.799064 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.819344 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.875338 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.927487 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 09:10:48 crc kubenswrapper[4827]: I0126 09:10:48.945212 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.024171 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 09:10:49 crc 
kubenswrapper[4827]: I0126 09:10:49.097475 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.116804 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.198096 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.243607 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.269946 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.336010 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.371616 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.392672 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.417304 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.456388 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.481565 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.487256 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.488582 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.803800 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.812281 4827 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.812513 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008" gracePeriod=5 Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.827297 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.912255 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 09:10:49 crc kubenswrapper[4827]: I0126 09:10:49.999014 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.015504 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.101575 
4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.117085 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.150865 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.200453 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.232183 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.405341 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.448474 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.459568 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.575232 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.591478 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.602891 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.734252 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.747955 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.808480 4827 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.811008 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.893966 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.898338 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.936944 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 09:10:50 crc kubenswrapper[4827]: I0126 09:10:50.991485 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.020101 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.086387 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.099776 4827 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.169468 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.179733 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.192666 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.345198 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.536035 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.588925 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.656921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.678287 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.684876 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.725330 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.751035 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.776614 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.790512 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.912260 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.935326 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.973774 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.985485 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 09:10:51 crc kubenswrapper[4827]: I0126 09:10:51.998306 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.160629 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.280584 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.502986 4827 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.552081 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.583442 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.651654 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.937929 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 09:10:52 crc kubenswrapper[4827]: I0126 09:10:52.977558 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.042701 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.117057 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.160842 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.389500 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.390228 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 
26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.391182 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.738350 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.838566 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 09:10:53 crc kubenswrapper[4827]: I0126 09:10:53.851881 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.012047 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.034258 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.075684 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.167729 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.188847 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.204893 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.220264 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.271240 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.282276 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.335204 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.612389 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.742999 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.951617 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.951886 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989612 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989711 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989723 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989786 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989807 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989828 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989839 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989841 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.989939 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.990138 4827 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.990159 4827 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.990169 4827 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.990179 4827 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:54 crc kubenswrapper[4827]: I0126 09:10:54.998166 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.091828 4827 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.264871 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.353264 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.369491 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.388728 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.400960 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.401041 4827 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008" exitCode=137 Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.401082 4827 scope.go:117] "RemoveContainer" containerID="409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.401121 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.425656 4827 scope.go:117] "RemoveContainer" containerID="409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008" Jan 26 09:10:55 crc kubenswrapper[4827]: E0126 09:10:55.427735 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008\": container with ID starting with 409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008 not found: ID does not exist" containerID="409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.427770 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008"} err="failed to get container status \"409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008\": rpc error: code = NotFound desc = could not find container \"409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008\": container with ID starting with 409e276cc4e39846d1faa335c6e89ea8505dc2aad09b6505a0b2158d0a7ff008 not found: ID does not exist" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.632988 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.711575 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.712024 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 09:10:55 crc kubenswrapper[4827]: 
I0126 09:10:55.722609 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.723427 4827 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f7d03009-e6fe-4aeb-a92d-b3e2ae41521c" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.728844 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.728895 4827 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f7d03009-e6fe-4aeb-a92d-b3e2ae41521c" Jan 26 09:10:55 crc kubenswrapper[4827]: I0126 09:10:55.761821 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.010685 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.022781 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.094477 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.482013 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.762551 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 09:10:56 crc 
kubenswrapper[4827]: I0126 09:10:56.931083 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 09:10:56 crc kubenswrapper[4827]: I0126 09:10:56.990341 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 09:10:57 crc kubenswrapper[4827]: I0126 09:10:57.175480 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 09:10:57 crc kubenswrapper[4827]: I0126 09:10:57.624447 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 09:10:58 crc kubenswrapper[4827]: I0126 09:10:58.043493 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.483531 4827 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.503032 4827 generic.go:334] "Generic (PLEG): container finished" podID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerID="5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7" exitCode=0 Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.503077 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerDied","Data":"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"} Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.503487 4827 scope.go:117] "RemoveContainer" containerID="5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7" Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.712121 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:11:11 crc kubenswrapper[4827]: I0126 09:11:11.712381 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:11:12 crc kubenswrapper[4827]: I0126 09:11:12.508102 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerStarted","Data":"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"} Jan 26 09:11:12 crc kubenswrapper[4827]: I0126 09:11:12.508364 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:11:12 crc kubenswrapper[4827]: I0126 09:11:12.511430 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.502576 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"] Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.503129 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" podUID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" containerName="controller-manager" containerID="cri-o://83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68" gracePeriod=30 Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.626486 4827 generic.go:334] "Generic (PLEG): container finished" podID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" containerID="83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68" exitCode=0 Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.626556 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" event={"ID":"45cfc4e3-d32b-4e71-8038-89a9350cb87b","Type":"ContainerDied","Data":"83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68"} Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.645518 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"] Jan 26 09:11:25 crc kubenswrapper[4827]: I0126 09:11:25.647084 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" podUID="7a9fd91f-a5e7-491f-9e75-1766cefac723" containerName="route-controller-manager" containerID="cri-o://6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896" gracePeriod=30 Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.017121 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.068308 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.123515 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert\") pod \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.123557 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca\") pod \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.123610 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles\") pod \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.123628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config\") pod \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.123670 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvzdg\" (UniqueName: \"kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg\") pod \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\" (UID: \"45cfc4e3-d32b-4e71-8038-89a9350cb87b\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.125144 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca" (OuterVolumeSpecName: "client-ca") pod "45cfc4e3-d32b-4e71-8038-89a9350cb87b" (UID: "45cfc4e3-d32b-4e71-8038-89a9350cb87b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.125630 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45cfc4e3-d32b-4e71-8038-89a9350cb87b" (UID: "45cfc4e3-d32b-4e71-8038-89a9350cb87b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.125957 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config" (OuterVolumeSpecName: "config") pod "45cfc4e3-d32b-4e71-8038-89a9350cb87b" (UID: "45cfc4e3-d32b-4e71-8038-89a9350cb87b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.130055 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg" (OuterVolumeSpecName: "kube-api-access-tvzdg") pod "45cfc4e3-d32b-4e71-8038-89a9350cb87b" (UID: "45cfc4e3-d32b-4e71-8038-89a9350cb87b"). InnerVolumeSpecName "kube-api-access-tvzdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.131228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45cfc4e3-d32b-4e71-8038-89a9350cb87b" (UID: "45cfc4e3-d32b-4e71-8038-89a9350cb87b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.224773 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert\") pod \"7a9fd91f-a5e7-491f-9e75-1766cefac723\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.224852 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca\") pod \"7a9fd91f-a5e7-491f-9e75-1766cefac723\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.224924 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config\") pod \"7a9fd91f-a5e7-491f-9e75-1766cefac723\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.224983 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fk6g\" (UniqueName: \"kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g\") pod \"7a9fd91f-a5e7-491f-9e75-1766cefac723\" (UID: \"7a9fd91f-a5e7-491f-9e75-1766cefac723\") " Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225180 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cfc4e3-d32b-4e71-8038-89a9350cb87b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225198 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc 
kubenswrapper[4827]: I0126 09:11:26.225209 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225224 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cfc4e3-d32b-4e71-8038-89a9350cb87b-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225234 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvzdg\" (UniqueName: \"kubernetes.io/projected/45cfc4e3-d32b-4e71-8038-89a9350cb87b-kube-api-access-tvzdg\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225544 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a9fd91f-a5e7-491f-9e75-1766cefac723" (UID: "7a9fd91f-a5e7-491f-9e75-1766cefac723"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.225996 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config" (OuterVolumeSpecName: "config") pod "7a9fd91f-a5e7-491f-9e75-1766cefac723" (UID: "7a9fd91f-a5e7-491f-9e75-1766cefac723"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.227913 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g" (OuterVolumeSpecName: "kube-api-access-7fk6g") pod "7a9fd91f-a5e7-491f-9e75-1766cefac723" (UID: "7a9fd91f-a5e7-491f-9e75-1766cefac723"). 
InnerVolumeSpecName "kube-api-access-7fk6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.228134 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a9fd91f-a5e7-491f-9e75-1766cefac723" (UID: "7a9fd91f-a5e7-491f-9e75-1766cefac723"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.325827 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9fd91f-a5e7-491f-9e75-1766cefac723-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.325871 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.325880 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9fd91f-a5e7-491f-9e75-1766cefac723-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.325889 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fk6g\" (UniqueName: \"kubernetes.io/projected/7a9fd91f-a5e7-491f-9e75-1766cefac723-kube-api-access-7fk6g\") on node \"crc\" DevicePath \"\"" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.633064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" event={"ID":"45cfc4e3-d32b-4e71-8038-89a9350cb87b","Type":"ContainerDied","Data":"c09b8a59c4d02fcf70adbf5ac4603b2b6c971f1a553c8dad198180a9c0e7468a"} Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.633117 4827 
scope.go:117] "RemoveContainer" containerID="83f80464f08aa90e3ad02a30fac294272b3d3eebff73ba9212c9137e91d0fe68" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.633206 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jdttz" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.637606 4827 generic.go:334] "Generic (PLEG): container finished" podID="7a9fd91f-a5e7-491f-9e75-1766cefac723" containerID="6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896" exitCode=0 Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.637661 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" event={"ID":"7a9fd91f-a5e7-491f-9e75-1766cefac723","Type":"ContainerDied","Data":"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896"} Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.637692 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" event={"ID":"7a9fd91f-a5e7-491f-9e75-1766cefac723","Type":"ContainerDied","Data":"5d8a8b8d0043ff4565021b3680da2d762dd88c5a96793444b4bae1618567e6fb"} Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.637748 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.651186 4827 scope.go:117] "RemoveContainer" containerID="6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.668123 4827 scope.go:117] "RemoveContainer" containerID="6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.668213 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"] Jan 26 09:11:26 crc kubenswrapper[4827]: E0126 09:11:26.668484 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896\": container with ID starting with 6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896 not found: ID does not exist" containerID="6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.668513 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896"} err="failed to get container status \"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896\": rpc error: code = NotFound desc = could not find container \"6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896\": container with ID starting with 6126938f509617ebcafe2258a1c9d02a77e1c7829a795f15dfdad02dd1ea1896 not found: ID does not exist" Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.685835 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jdttz"] Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.689492 4827 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"] Jan 26 09:11:26 crc kubenswrapper[4827]: I0126 09:11:26.692497 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sbrrs"] Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.712145 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" path="/var/lib/kubelet/pods/45cfc4e3-d32b-4e71-8038-89a9350cb87b/volumes" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.713377 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9fd91f-a5e7-491f-9e75-1766cefac723" path="/var/lib/kubelet/pods/7a9fd91f-a5e7-491f-9e75-1766cefac723/volumes" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.769201 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"] Jan 26 09:11:27 crc kubenswrapper[4827]: E0126 09:11:27.769761 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.769785 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 09:11:27 crc kubenswrapper[4827]: E0126 09:11:27.769799 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" containerName="controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.769807 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" containerName="controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: E0126 09:11:27.769864 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" containerName="installer" 
Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.769874 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" containerName="installer" Jan 26 09:11:27 crc kubenswrapper[4827]: E0126 09:11:27.769892 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9fd91f-a5e7-491f-9e75-1766cefac723" containerName="route-controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771085 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9fd91f-a5e7-491f-9e75-1766cefac723" containerName="route-controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771189 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cfc4e3-d32b-4e71-8038-89a9350cb87b" containerName="controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771200 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9fd91f-a5e7-491f-9e75-1766cefac723" containerName="route-controller-manager" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771214 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771224 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="949b8545-8ccd-45c6-942d-fccf65af803b" containerName="installer" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.771620 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.773791 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.774235 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.774263 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.774610 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.774832 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.775145 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.777631 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"] Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.778265 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.783931 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"] Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.784272 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.784272 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.784769 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.785144 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.785514 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.786095 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.788572 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"] Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.798851 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.843994 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.844061 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.844127 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.844174 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff77n\" (UniqueName: \"kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " 
pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945118 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945164 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945240 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945278 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945409 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945451 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945469 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw66x\" (UniqueName: \"kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.945516 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff77n\" (UniqueName: \"kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.946088 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 
26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.946479 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.955002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:27 crc kubenswrapper[4827]: I0126 09:11:27.967268 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff77n\" (UniqueName: \"kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n\") pod \"route-controller-manager-66d9b996-rwflp\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.046838 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.046883 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: 
\"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.046907 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw66x\" (UniqueName: \"kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.046973 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.047012 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.047858 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.048188 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.048328 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.050510 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.062293 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw66x\" (UniqueName: \"kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x\") pod \"controller-manager-77d68bfdb-7tp5j\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.097499 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.107956 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.308056 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"] Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.336944 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"] Jan 26 09:11:28 crc kubenswrapper[4827]: W0126 09:11:28.341877 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8567781c_9f3a_4de8_a74a_fca92c156fc8.slice/crio-2ae17c8c4e6f0daad7c176e1f89ce85bc5a91e760a7f032406a14d4a437c0992 WatchSource:0}: Error finding container 2ae17c8c4e6f0daad7c176e1f89ce85bc5a91e760a7f032406a14d4a437c0992: Status 404 returned error can't find the container with id 2ae17c8c4e6f0daad7c176e1f89ce85bc5a91e760a7f032406a14d4a437c0992 Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.649575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" event={"ID":"223c2179-2f20-4d48-83b8-4753a9f9d2c1","Type":"ContainerStarted","Data":"6eb461c3ada568333e70cf97b1e56e077601a6b5a2bb7d02093e688f57805fa5"} Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.649886 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" event={"ID":"223c2179-2f20-4d48-83b8-4753a9f9d2c1","Type":"ContainerStarted","Data":"4b11a4f5ff6c7b229a6fb235c8836ec5b28c392417cefc6be67389324d8e9430"} Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.650257 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.652282 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" event={"ID":"8567781c-9f3a-4de8-a74a-fca92c156fc8","Type":"ContainerStarted","Data":"268827b1e9e4e98935a4dd1348e6efac7f237877b215522d07cb5105d0fe0a3f"} Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.652323 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" event={"ID":"8567781c-9f3a-4de8-a74a-fca92c156fc8","Type":"ContainerStarted","Data":"2ae17c8c4e6f0daad7c176e1f89ce85bc5a91e760a7f032406a14d4a437c0992"} Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.652620 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.671814 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" podStartSLOduration=3.6717940860000002 podStartE2EDuration="3.671794086s" podCreationTimestamp="2026-01-26 09:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:11:28.66908066 +0000 UTC m=+317.317752489" watchObservedRunningTime="2026-01-26 09:11:28.671794086 +0000 UTC m=+317.320465905" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.696129 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" podStartSLOduration=3.696108644 podStartE2EDuration="3.696108644s" podCreationTimestamp="2026-01-26 09:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:11:28.691348431 +0000 UTC m=+317.340020260" watchObservedRunningTime="2026-01-26 09:11:28.696108644 
+0000 UTC m=+317.344780463" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.708790 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:11:28 crc kubenswrapper[4827]: I0126 09:11:28.901567 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:12:12 crc kubenswrapper[4827]: I0126 09:12:12.268761 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:12:12 crc kubenswrapper[4827]: I0126 09:12:12.269357 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.034202 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.035273 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tl6c2" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="registry-server" containerID="cri-o://1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e" gracePeriod=30 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.039414 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv4tx"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.039710 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rv4tx" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="registry-server" containerID="cri-o://096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c" gracePeriod=30 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.045441 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.045681 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" containerID="cri-o://5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52" gracePeriod=30 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.073751 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.074036 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8fr9c" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="registry-server" containerID="cri-o://64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de" gracePeriod=30 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.088698 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hnfv7"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.089415 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.092589 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.092854 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tmph" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="registry-server" containerID="cri-o://444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821" gracePeriod=30 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.112002 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hnfv7"] Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.178450 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.178510 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.178547 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tc7\" (UniqueName: 
\"kubernetes.io/projected/164c8367-04d2-44e4-b127-fe8b2a6b62e8-kube-api-access-s6tc7\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.280207 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.280264 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tc7\" (UniqueName: \"kubernetes.io/projected/164c8367-04d2-44e4-b127-fe8b2a6b62e8-kube-api-access-s6tc7\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.280327 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.281962 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc 
kubenswrapper[4827]: I0126 09:12:16.295788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/164c8367-04d2-44e4-b127-fe8b2a6b62e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.302554 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6tc7\" (UniqueName: \"kubernetes.io/projected/164c8367-04d2-44e4-b127-fe8b2a6b62e8-kube-api-access-s6tc7\") pod \"marketplace-operator-79b997595-hnfv7\" (UID: \"164c8367-04d2-44e4-b127-fe8b2a6b62e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: E0126 09:12:16.399680 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19386a1_51f4_4396_b49d_4ee6974c1126.slice/crio-conmon-1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1899b44_9f9b_4212_ab42_01ffbb3bc5d7.slice/crio-444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1899b44_9f9b_4212_ab42_01ffbb3bc5d7.slice/crio-conmon-444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.410010 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.562106 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.688949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca\") pod \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.689178 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrnq\" (UniqueName: \"kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq\") pod \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.689234 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics\") pod \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\" (UID: \"e022fa35-5182-4d6b-b6f3-e05822ac8ee9\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.690911 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e022fa35-5182-4d6b-b6f3-e05822ac8ee9" (UID: "e022fa35-5182-4d6b-b6f3-e05822ac8ee9"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.696415 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e022fa35-5182-4d6b-b6f3-e05822ac8ee9" (UID: "e022fa35-5182-4d6b-b6f3-e05822ac8ee9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.704372 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq" (OuterVolumeSpecName: "kube-api-access-ztrnq") pod "e022fa35-5182-4d6b-b6f3-e05822ac8ee9" (UID: "e022fa35-5182-4d6b-b6f3-e05822ac8ee9"). InnerVolumeSpecName "kube-api-access-ztrnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.746976 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fr9c" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.762088 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tl6c2" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.790313 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.790349 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrnq\" (UniqueName: \"kubernetes.io/projected/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-kube-api-access-ztrnq\") on node \"crc\" DevicePath \"\"" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.790358 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e022fa35-5182-4d6b-b6f3-e05822ac8ee9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.859485 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tmph" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.860388 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv4tx" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894151 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities\") pod \"f19386a1-51f4-4396-b49d-4ee6974c1126\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894223 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content\") pod \"f19386a1-51f4-4396-b49d-4ee6974c1126\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894290 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mdh\" (UniqueName: \"kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh\") pod \"f19386a1-51f4-4396-b49d-4ee6974c1126\" (UID: \"f19386a1-51f4-4396-b49d-4ee6974c1126\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894326 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content\") pod \"32023ace-27de-4377-9cfb-27c706ef9205\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894376 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwlhp\" (UniqueName: \"kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp\") pod \"32023ace-27de-4377-9cfb-27c706ef9205\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.894398 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities\") pod \"32023ace-27de-4377-9cfb-27c706ef9205\" (UID: \"32023ace-27de-4377-9cfb-27c706ef9205\") " Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.895123 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities" (OuterVolumeSpecName: "utilities") pod "32023ace-27de-4377-9cfb-27c706ef9205" (UID: "32023ace-27de-4377-9cfb-27c706ef9205"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.895597 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities" (OuterVolumeSpecName: "utilities") pod "f19386a1-51f4-4396-b49d-4ee6974c1126" (UID: "f19386a1-51f4-4396-b49d-4ee6974c1126"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.898837 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh" (OuterVolumeSpecName: "kube-api-access-46mdh") pod "f19386a1-51f4-4396-b49d-4ee6974c1126" (UID: "f19386a1-51f4-4396-b49d-4ee6974c1126"). InnerVolumeSpecName "kube-api-access-46mdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.899660 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp" (OuterVolumeSpecName: "kube-api-access-bwlhp") pod "32023ace-27de-4377-9cfb-27c706ef9205" (UID: "32023ace-27de-4377-9cfb-27c706ef9205"). InnerVolumeSpecName "kube-api-access-bwlhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.920442 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32023ace-27de-4377-9cfb-27c706ef9205" (UID: "32023ace-27de-4377-9cfb-27c706ef9205"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.928196 4827 generic.go:334] "Generic (PLEG): container finished" podID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerID="096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c" exitCode=0 Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.928242 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerDied","Data":"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c"} Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.928268 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv4tx" event={"ID":"3faf08fa-1553-4b39-b2f3-63f4b2985f4f","Type":"ContainerDied","Data":"687e43433bd23342308129e66a4779c9be4b3dd630242b3777473623f7f3632f"} Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.928283 4827 scope.go:117] "RemoveContainer" containerID="096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c" Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.928371 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv4tx"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.940818 4827 generic.go:334] "Generic (PLEG): container finished" podID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerID="5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52" exitCode=0
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.940884 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerDied","Data":"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.940903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb" event={"ID":"e022fa35-5182-4d6b-b6f3-e05822ac8ee9","Type":"ContainerDied","Data":"0f373ed3e6d411a8a0f32b998c69dc9b1bc117056d9a290e02f50851f010da95"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.940947 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsztb"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.946285 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f19386a1-51f4-4396-b49d-4ee6974c1126" (UID: "f19386a1-51f4-4396-b49d-4ee6974c1126"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.947821 4827 generic.go:334] "Generic (PLEG): container finished" podID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerID="444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821" exitCode=0
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.947879 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerDied","Data":"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.947889 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tmph"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.947903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tmph" event={"ID":"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7","Type":"ContainerDied","Data":"fe038d737b09d4b223b5ad4af166bc44c003c18299bd3434176092929ed018d3"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.953585 4827 generic.go:334] "Generic (PLEG): container finished" podID="32023ace-27de-4377-9cfb-27c706ef9205" containerID="64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de" exitCode=0
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.953674 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerDied","Data":"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.953704 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fr9c" event={"ID":"32023ace-27de-4377-9cfb-27c706ef9205","Type":"ContainerDied","Data":"51ce757ae6c9980d40d8c9d6cb061989411b239324cefd6122db22e0cdd93138"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.953779 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fr9c"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.954079 4827 scope.go:117] "RemoveContainer" containerID="c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.959427 4827 generic.go:334] "Generic (PLEG): container finished" podID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerID="1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e" exitCode=0
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.959461 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerDied","Data":"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.959485 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tl6c2" event={"ID":"f19386a1-51f4-4396-b49d-4ee6974c1126","Type":"ContainerDied","Data":"aae71ce3fa9bd19b31c88225b4c79df51dcd52d822e6a74f05a0b2ca4f3dd330"}
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.959535 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tl6c2"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.992982 4827 scope.go:117] "RemoveContainer" containerID="71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3"
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.995846 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities\") pod \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.995904 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gwdc\" (UniqueName: \"kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc\") pod \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.995967 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities\") pod \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.995994 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsjj7\" (UniqueName: \"kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7\") pod \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.996060 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content\") pod \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\" (UID: \"3faf08fa-1553-4b39-b2f3-63f4b2985f4f\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.996093 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content\") pod \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\" (UID: \"b1899b44-9f9b-4212-ab42-01ffbb3bc5d7\") "
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.996197 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"]
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.996973 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mdh\" (UniqueName: \"kubernetes.io/projected/f19386a1-51f4-4396-b49d-4ee6974c1126-kube-api-access-46mdh\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.996995 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.997004 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwlhp\" (UniqueName: \"kubernetes.io/projected/32023ace-27de-4377-9cfb-27c706ef9205-kube-api-access-bwlhp\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.997013 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32023ace-27de-4377-9cfb-27c706ef9205-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.997022 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.997030 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19386a1-51f4-4396-b49d-4ee6974c1126-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.997417 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities" (OuterVolumeSpecName: "utilities") pod "3faf08fa-1553-4b39-b2f3-63f4b2985f4f" (UID: "3faf08fa-1553-4b39-b2f3-63f4b2985f4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:12:16 crc kubenswrapper[4827]: I0126 09:12:16.998111 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities" (OuterVolumeSpecName: "utilities") pod "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" (UID: "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.007097 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7" (OuterVolumeSpecName: "kube-api-access-jsjj7") pod "3faf08fa-1553-4b39-b2f3-63f4b2985f4f" (UID: "3faf08fa-1553-4b39-b2f3-63f4b2985f4f"). InnerVolumeSpecName "kube-api-access-jsjj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.011879 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc" (OuterVolumeSpecName: "kube-api-access-5gwdc") pod "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" (UID: "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7"). InnerVolumeSpecName "kube-api-access-5gwdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.014675 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsztb"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.025109 4827 scope.go:117] "RemoveContainer" containerID="096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.027754 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c\": container with ID starting with 096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c not found: ID does not exist" containerID="096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.027812 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c"} err="failed to get container status \"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c\": rpc error: code = NotFound desc = could not find container \"096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c\": container with ID starting with 096bb8699b5ed691d2aac4254b08c29f9af44f58792518645d1a3f63dbece26c not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.027845 4827 scope.go:117] "RemoveContainer" containerID="c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.030489 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6\": container with ID starting with c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6 not found: ID does not exist" containerID="c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.030680 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6"} err="failed to get container status \"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6\": rpc error: code = NotFound desc = could not find container \"c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6\": container with ID starting with c910088af9832ec21b4a86e5cf63a63dd40555b34b6ed10b99878ffdd2dc06e6 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.030779 4827 scope.go:117] "RemoveContainer" containerID="71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.033516 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"]
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.034254 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3\": container with ID starting with 71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3 not found: ID does not exist" containerID="71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.034512 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3"} err="failed to get container status \"71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3\": rpc error: code = NotFound desc = could not find container \"71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3\": container with ID starting with 71081d0bc5ab80ae32122be66797e000bd3ac03a79402e9c4216de97457f81a3 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.034544 4827 scope.go:117] "RemoveContainer" containerID="5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.042054 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tl6c2"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.052437 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.052787 4827 scope.go:117] "RemoveContainer" containerID="5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.065077 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fr9c"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.068340 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hnfv7"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.069796 4827 scope.go:117] "RemoveContainer" containerID="5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.070064 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52\": container with ID starting with 5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52 not found: ID does not exist" containerID="5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.070092 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52"} err="failed to get container status \"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52\": rpc error: code = NotFound desc = could not find container \"5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52\": container with ID starting with 5cd10e0d0d54ea00157c2002ad22f6a0aa234850d8aaedebbfd562173bbfdc52 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.070114 4827 scope.go:117] "RemoveContainer" containerID="5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.070337 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7\": container with ID starting with 5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7 not found: ID does not exist" containerID="5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.070367 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7"} err="failed to get container status \"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7\": rpc error: code = NotFound desc = could not find container \"5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7\": container with ID starting with 5780bb9d3f08992dfa9036048893ee564c518fe1f61ee2c40135de2b12eea0a7 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.070380 4827 scope.go:117] "RemoveContainer" containerID="444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.078542 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3faf08fa-1553-4b39-b2f3-63f4b2985f4f" (UID: "3faf08fa-1553-4b39-b2f3-63f4b2985f4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.091326 4827 scope.go:117] "RemoveContainer" containerID="bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.098572 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.098596 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.098608 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gwdc\" (UniqueName: \"kubernetes.io/projected/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-kube-api-access-5gwdc\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.098618 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.098873 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsjj7\" (UniqueName: \"kubernetes.io/projected/3faf08fa-1553-4b39-b2f3-63f4b2985f4f-kube-api-access-jsjj7\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.110979 4827 scope.go:117] "RemoveContainer" containerID="f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.138455 4827 scope.go:117] "RemoveContainer" containerID="444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.138905 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821\": container with ID starting with 444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821 not found: ID does not exist" containerID="444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.138950 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821"} err="failed to get container status \"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821\": rpc error: code = NotFound desc = could not find container \"444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821\": container with ID starting with 444a86d78157b29efe6e81bb8921f020038f6f143de997d2c22c9b971419e821 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.138980 4827 scope.go:117] "RemoveContainer" containerID="bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.139268 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a\": container with ID starting with bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a not found: ID does not exist" containerID="bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.139310 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a"} err="failed to get container status \"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a\": rpc error: code = NotFound desc = could not find container \"bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a\": container with ID starting with bf9b91c11c001c2863af7c50bb494bb78b33f6155db86170431d365059dc086a not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.139338 4827 scope.go:117] "RemoveContainer" containerID="f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.139589 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635\": container with ID starting with f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635 not found: ID does not exist" containerID="f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.139619 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635"} err="failed to get container status \"f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635\": rpc error: code = NotFound desc = could not find container \"f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635\": container with ID starting with f09daf2c8e331abd7d9b38d81fd4e107814df79a62d12fe05227283d84d60635 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.139649 4827 scope.go:117] "RemoveContainer" containerID="64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.157497 4827 scope.go:117] "RemoveContainer" containerID="4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.186271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" (UID: "b1899b44-9f9b-4212-ab42-01ffbb3bc5d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.200199 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.222546 4827 scope.go:117] "RemoveContainer" containerID="62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.257277 4827 scope.go:117] "RemoveContainer" containerID="64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.259756 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de\": container with ID starting with 64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de not found: ID does not exist" containerID="64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.259816 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de"} err="failed to get container status \"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de\": rpc error: code = NotFound desc = could not find container \"64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de\": container with ID starting with 64a3e7b2442a758e869d873f1c8dfb7aa0af586dfc348d98862e6cf005bcd0de not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.259843 4827 scope.go:117] "RemoveContainer" containerID="4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.260170 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986\": container with ID starting with 4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986 not found: ID does not exist" containerID="4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.260192 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986"} err="failed to get container status \"4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986\": rpc error: code = NotFound desc = could not find container \"4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986\": container with ID starting with 4815b33963e4d136a034dcaaa7919c121a11d9b38224815cb28a0c6b60f4c986 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.260207 4827 scope.go:117] "RemoveContainer" containerID="62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.260474 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd\": container with ID starting with 62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd not found: ID does not exist" containerID="62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.260501 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd"} err="failed to get container status \"62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd\": rpc error: code = NotFound desc = could not find container \"62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd\": container with ID starting with 62ff2675b037f75537909f17863ec863c0a4d6f7a68e29744a4869b28de1a0dd not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.260517 4827 scope.go:117] "RemoveContainer" containerID="1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.281349 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv4tx"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.282205 4827 scope.go:117] "RemoveContainer" containerID="f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.291478 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rv4tx"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.304594 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.306267 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tmph"]
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.306334 4827 scope.go:117] "RemoveContainer" containerID="791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.318836 4827 scope.go:117] "RemoveContainer" containerID="1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.319228 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e\": container with ID starting with 1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e not found: ID does not exist" containerID="1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.319257 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e"} err="failed to get container status \"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e\": rpc error: code = NotFound desc = could not find container \"1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e\": container with ID starting with 1f84efcbf024cc0ad4b292cad4cb7a2dfc070c777e37af71da7b5bb4e03a487e not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.319277 4827 scope.go:117] "RemoveContainer" containerID="f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.319675 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373\": container with ID starting with f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373 not found: ID does not exist" containerID="f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.319722 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373"} err="failed to get container status \"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373\": rpc error: code = NotFound desc = could not find container \"f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373\": container with ID starting with f7b15fc6218bd7eba2796419ec5ebd3f9dd2d3f793d9a46141efb4fbf896c373 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.319762 4827 scope.go:117] "RemoveContainer" containerID="791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90"
Jan 26 09:12:17 crc kubenswrapper[4827]: E0126 09:12:17.319965 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90\": container with ID starting with 791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90 not found: ID does not exist" containerID="791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.319994 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90"} err="failed to get container status \"791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90\": rpc error: code = NotFound desc = could not find container \"791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90\": container with ID starting with 791623008edb2f9a0a6e64439e70cf54f5739890a697248fe338188828c57d90 not found: ID does not exist"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.708204 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32023ace-27de-4377-9cfb-27c706ef9205" path="/var/lib/kubelet/pods/32023ace-27de-4377-9cfb-27c706ef9205/volumes"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.708878 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" path="/var/lib/kubelet/pods/3faf08fa-1553-4b39-b2f3-63f4b2985f4f/volumes"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.709580 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" path="/var/lib/kubelet/pods/b1899b44-9f9b-4212-ab42-01ffbb3bc5d7/volumes"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.710817 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" path="/var/lib/kubelet/pods/e022fa35-5182-4d6b-b6f3-e05822ac8ee9/volumes"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.711350 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" path="/var/lib/kubelet/pods/f19386a1-51f4-4396-b49d-4ee6974c1126/volumes"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.971103 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" event={"ID":"164c8367-04d2-44e4-b127-fe8b2a6b62e8","Type":"ContainerStarted","Data":"9976573dbeb0d4920a36cb8631d81ec5dbb8796308a2aa44a3c19bc947643d88"}
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.971171 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" event={"ID":"164c8367-04d2-44e4-b127-fe8b2a6b62e8","Type":"ContainerStarted","Data":"3a3782af915a632038d905287be41d381bd9062d49f7abbdc2ec5dbeb64e6ab8"}
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.971769 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.985010 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7"
Jan 26 09:12:17 crc kubenswrapper[4827]: I0126 09:12:17.996455 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hnfv7" podStartSLOduration=1.996430113 podStartE2EDuration="1.996430113s" podCreationTimestamp="2026-01-26 09:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:12:17.991499493 +0000 UTC m=+366.640171312" watchObservedRunningTime="2026-01-26 09:12:17.996430113 +0000 UTC m=+366.645101932"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.239922 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fngdm"]
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240401 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="registry-server"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240412 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="registry-server"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240422 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240427 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240439 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="extract-content"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240445 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="extract-content"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240454 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="registry-server"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240459 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="registry-server"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240469 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240474 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240483 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240488 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240496 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240502 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="extract-utilities"
Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240511 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="registry-server"
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240517 4827
state_mem.go:107] "Deleted CPUSet assignment" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240527 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="extract-utilities" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240533 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="extract-utilities" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240541 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240547 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240553 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240559 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240566 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240571 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240579 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240585 4827 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="extract-content" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240686 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="32023ace-27de-4377-9cfb-27c706ef9205" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240695 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240703 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3faf08fa-1553-4b39-b2f3-63f4b2985f4f" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240712 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19386a1-51f4-4396-b49d-4ee6974c1126" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240719 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1899b44-9f9b-4212-ab42-01ffbb3bc5d7" containerName="registry-server" Jan 26 09:12:18 crc kubenswrapper[4827]: E0126 09:12:18.240798 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240805 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.240886 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e022fa35-5182-4d6b-b6f3-e05822ac8ee9" containerName="marketplace-operator" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.241392 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.245394 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.249379 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fngdm"] Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.312158 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxpfd\" (UniqueName: \"kubernetes.io/projected/14e1c005-d10a-430f-881a-a222cd695727-kube-api-access-zxpfd\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.312230 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-utilities\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.312334 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-catalog-content\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.413713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxpfd\" (UniqueName: \"kubernetes.io/projected/14e1c005-d10a-430f-881a-a222cd695727-kube-api-access-zxpfd\") pod \"redhat-marketplace-fngdm\" (UID: 
\"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.413816 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-utilities\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.413849 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-catalog-content\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.414265 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-utilities\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.414330 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e1c005-d10a-430f-881a-a222cd695727-catalog-content\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.442960 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxpfd\" (UniqueName: \"kubernetes.io/projected/14e1c005-d10a-430f-881a-a222cd695727-kube-api-access-zxpfd\") pod \"redhat-marketplace-fngdm\" (UID: \"14e1c005-d10a-430f-881a-a222cd695727\") " 
pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.448436 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9f9b"] Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.449386 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.452144 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.458805 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f9b"] Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.515371 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgkv\" (UniqueName: \"kubernetes.io/projected/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-kube-api-access-hzgkv\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.515793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-catalog-content\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.515904 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-utilities\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" 
Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.560818 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.617456 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgkv\" (UniqueName: \"kubernetes.io/projected/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-kube-api-access-hzgkv\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.617520 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-catalog-content\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.617542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-utilities\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.617948 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-utilities\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.618393 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-catalog-content\") pod 
\"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.638418 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgkv\" (UniqueName: \"kubernetes.io/projected/ef155af2-e9c9-45d6-8ea9-19ca71f455d1-kube-api-access-hzgkv\") pod \"redhat-operators-z9f9b\" (UID: \"ef155af2-e9c9-45d6-8ea9-19ca71f455d1\") " pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.773593 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.782755 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fngdm"] Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.979779 4827 generic.go:334] "Generic (PLEG): container finished" podID="14e1c005-d10a-430f-881a-a222cd695727" containerID="f1f2d880233522c4ebf801a737c4ec370fb8517f400c3e01246c82ed79d969e3" exitCode=0 Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.980997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fngdm" event={"ID":"14e1c005-d10a-430f-881a-a222cd695727","Type":"ContainerDied","Data":"f1f2d880233522c4ebf801a737c4ec370fb8517f400c3e01246c82ed79d969e3"} Jan 26 09:12:18 crc kubenswrapper[4827]: I0126 09:12:18.981027 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fngdm" event={"ID":"14e1c005-d10a-430f-881a-a222cd695727","Type":"ContainerStarted","Data":"6a922b62b4c301e976da51a10b938736bff2852b1a85957951544a6b3e9141b6"} Jan 26 09:12:19 crc kubenswrapper[4827]: I0126 09:12:19.011920 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f9b"] Jan 26 09:12:19 crc kubenswrapper[4827]: 
W0126 09:12:19.020658 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef155af2_e9c9_45d6_8ea9_19ca71f455d1.slice/crio-7d4f65ba740e5525a61fd4164287b74a97d69156a87c32df41a931aaa59dc2e7 WatchSource:0}: Error finding container 7d4f65ba740e5525a61fd4164287b74a97d69156a87c32df41a931aaa59dc2e7: Status 404 returned error can't find the container with id 7d4f65ba740e5525a61fd4164287b74a97d69156a87c32df41a931aaa59dc2e7 Jan 26 09:12:19 crc kubenswrapper[4827]: I0126 09:12:19.986830 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fngdm" event={"ID":"14e1c005-d10a-430f-881a-a222cd695727","Type":"ContainerStarted","Data":"b2a9cc7e9b6760d77a15526967992dc00b6a56f7f90f69781ba7434e66b8f3b0"} Jan 26 09:12:19 crc kubenswrapper[4827]: I0126 09:12:19.989169 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef155af2-e9c9-45d6-8ea9-19ca71f455d1" containerID="4e40fb522fef24371fc16f6e73c598d0d43a31f6353fd1d29c563a53a881b28d" exitCode=0 Jan 26 09:12:19 crc kubenswrapper[4827]: I0126 09:12:19.989228 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f9b" event={"ID":"ef155af2-e9c9-45d6-8ea9-19ca71f455d1","Type":"ContainerDied","Data":"4e40fb522fef24371fc16f6e73c598d0d43a31f6353fd1d29c563a53a881b28d"} Jan 26 09:12:19 crc kubenswrapper[4827]: I0126 09:12:19.989277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f9b" event={"ID":"ef155af2-e9c9-45d6-8ea9-19ca71f455d1","Type":"ContainerStarted","Data":"7d4f65ba740e5525a61fd4164287b74a97d69156a87c32df41a931aaa59dc2e7"} Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.639477 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jcwnv"] Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.640922 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.642972 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.648199 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcwnv"] Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.747796 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmt9\" (UniqueName: \"kubernetes.io/projected/e3295c9e-728c-4747-ab65-ee52cd048562-kube-api-access-xzmt9\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.747845 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-catalog-content\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.747911 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-utilities\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.838271 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-md9m5"] Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.839442 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.842965 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.849128 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzmt9\" (UniqueName: \"kubernetes.io/projected/e3295c9e-728c-4747-ab65-ee52cd048562-kube-api-access-xzmt9\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.849174 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-catalog-content\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.849245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-utilities\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.849600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-catalog-content\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.849756 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3295c9e-728c-4747-ab65-ee52cd048562-utilities\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.850471 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-md9m5"] Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.878191 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzmt9\" (UniqueName: \"kubernetes.io/projected/e3295c9e-728c-4747-ab65-ee52cd048562-kube-api-access-xzmt9\") pod \"certified-operators-jcwnv\" (UID: \"e3295c9e-728c-4747-ab65-ee52cd048562\") " pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.950713 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-utilities\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.950778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcsmd\" (UniqueName: \"kubernetes.io/projected/9381f7b0-db74-4848-b768-ee1071501178-kube-api-access-bcsmd\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.950814 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-catalog-content\") pod \"community-operators-md9m5\" (UID: 
\"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.973990 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.996915 4827 generic.go:334] "Generic (PLEG): container finished" podID="14e1c005-d10a-430f-881a-a222cd695727" containerID="b2a9cc7e9b6760d77a15526967992dc00b6a56f7f90f69781ba7434e66b8f3b0" exitCode=0 Jan 26 09:12:20 crc kubenswrapper[4827]: I0126 09:12:20.996996 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fngdm" event={"ID":"14e1c005-d10a-430f-881a-a222cd695727","Type":"ContainerDied","Data":"b2a9cc7e9b6760d77a15526967992dc00b6a56f7f90f69781ba7434e66b8f3b0"} Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.002875 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f9b" event={"ID":"ef155af2-e9c9-45d6-8ea9-19ca71f455d1","Type":"ContainerStarted","Data":"622745bdd08c7d89085518e439d14dbd45b1f97184c17027db42b6e2ed50710b"} Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.051453 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-catalog-content\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.051562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-utilities\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc 
kubenswrapper[4827]: I0126 09:12:21.051623 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcsmd\" (UniqueName: \"kubernetes.io/projected/9381f7b0-db74-4848-b768-ee1071501178-kube-api-access-bcsmd\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.052383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-utilities\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.052695 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381f7b0-db74-4848-b768-ee1071501178-catalog-content\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.082892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcsmd\" (UniqueName: \"kubernetes.io/projected/9381f7b0-db74-4848-b768-ee1071501178-kube-api-access-bcsmd\") pod \"community-operators-md9m5\" (UID: \"9381f7b0-db74-4848-b768-ee1071501178\") " pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.158101 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-md9m5"
Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.428949 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcwnv"]
Jan 26 09:12:21 crc kubenswrapper[4827]: W0126 09:12:21.436415 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3295c9e_728c_4747_ab65_ee52cd048562.slice/crio-f214b2aa0ca890c191b123ee888e11b72b1b948e3ae4c87942fadc61b6632ab5 WatchSource:0}: Error finding container f214b2aa0ca890c191b123ee888e11b72b1b948e3ae4c87942fadc61b6632ab5: Status 404 returned error can't find the container with id f214b2aa0ca890c191b123ee888e11b72b1b948e3ae4c87942fadc61b6632ab5
Jan 26 09:12:21 crc kubenswrapper[4827]: I0126 09:12:21.589452 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-md9m5"]
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.011191 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fngdm" event={"ID":"14e1c005-d10a-430f-881a-a222cd695727","Type":"ContainerStarted","Data":"f36994df068a4da9aa766d57b72266600b2b42133fa2d7c426677ae577170757"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.013907 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef155af2-e9c9-45d6-8ea9-19ca71f455d1" containerID="622745bdd08c7d89085518e439d14dbd45b1f97184c17027db42b6e2ed50710b" exitCode=0
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.013945 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f9b" event={"ID":"ef155af2-e9c9-45d6-8ea9-19ca71f455d1","Type":"ContainerDied","Data":"622745bdd08c7d89085518e439d14dbd45b1f97184c17027db42b6e2ed50710b"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.020109 4827 generic.go:334] "Generic (PLEG): container finished" podID="9381f7b0-db74-4848-b768-ee1071501178" containerID="5bd1d515fe513e878c42425093f8604673585f0e20f3f08d014c48053b097af1" exitCode=0
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.020206 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md9m5" event={"ID":"9381f7b0-db74-4848-b768-ee1071501178","Type":"ContainerDied","Data":"5bd1d515fe513e878c42425093f8604673585f0e20f3f08d014c48053b097af1"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.020231 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md9m5" event={"ID":"9381f7b0-db74-4848-b768-ee1071501178","Type":"ContainerStarted","Data":"c41e3d63a55502f8f5d7e7faddf7b502affcfbdfa7acbd13c7d8ea92315d5ebe"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.024869 4827 generic.go:334] "Generic (PLEG): container finished" podID="e3295c9e-728c-4747-ab65-ee52cd048562" containerID="89b3a40265720d5bd8a2b6b3b49fc4dbf17463c626f458b80cc40e34494b36b8" exitCode=0
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.024912 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwnv" event={"ID":"e3295c9e-728c-4747-ab65-ee52cd048562","Type":"ContainerDied","Data":"89b3a40265720d5bd8a2b6b3b49fc4dbf17463c626f458b80cc40e34494b36b8"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.024937 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwnv" event={"ID":"e3295c9e-728c-4747-ab65-ee52cd048562","Type":"ContainerStarted","Data":"f214b2aa0ca890c191b123ee888e11b72b1b948e3ae4c87942fadc61b6632ab5"}
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.034752 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fngdm" podStartSLOduration=1.512775162 podStartE2EDuration="4.034732286s" podCreationTimestamp="2026-01-26 09:12:18 +0000 UTC" firstStartedPulling="2026-01-26 09:12:18.983441745 +0000 UTC m=+367.632113564" lastFinishedPulling="2026-01-26 09:12:21.505398869 +0000 UTC m=+370.154070688" observedRunningTime="2026-01-26 09:12:22.027902108 +0000 UTC m=+370.676573927" watchObservedRunningTime="2026-01-26 09:12:22.034732286 +0000 UTC m=+370.683404105"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.915238 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4v9kf"]
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.916872 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981190 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvgp\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-kube-api-access-qdvgp\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981260 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-tls\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-bound-sa-token\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981306 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981345 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-certificates\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981394 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-trusted-ca\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:22 crc kubenswrapper[4827]: I0126 09:12:22.981458 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.033613 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f9b" event={"ID":"ef155af2-e9c9-45d6-8ea9-19ca71f455d1","Type":"ContainerStarted","Data":"389b3284a6076b1b8ad41d494daa12d699a7d0cecf7da2b42f6a0265cf5d40ca"}
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.037622 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md9m5" event={"ID":"9381f7b0-db74-4848-b768-ee1071501178","Type":"ContainerStarted","Data":"952bff6e1e17a00c077142f72f96ab549e309604c8a8799655670db0aa528b5e"}
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.045412 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4v9kf"]
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.082788 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvgp\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-kube-api-access-qdvgp\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083158 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-tls\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083187 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-bound-sa-token\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083243 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083266 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-certificates\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.083291 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-trusted-ca\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.084519 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-trusted-ca\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.085791 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.086696 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-certificates\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.099850 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-registry-tls\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.102417 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9f9b" podStartSLOduration=2.660181383 podStartE2EDuration="5.102393931s" podCreationTimestamp="2026-01-26 09:12:18 +0000 UTC" firstStartedPulling="2026-01-26 09:12:19.990829074 +0000 UTC m=+368.639500893" lastFinishedPulling="2026-01-26 09:12:22.433041622 +0000 UTC m=+371.081713441" observedRunningTime="2026-01-26 09:12:23.100462362 +0000 UTC m=+371.749134181" watchObservedRunningTime="2026-01-26 09:12:23.102393931 +0000 UTC m=+371.751065750"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.105218 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.121702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-bound-sa-token\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.138600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvgp\" (UniqueName: \"kubernetes.io/projected/46b1ccad-6b29-4eb2-99ec-8638ef5b269a-kube-api-access-qdvgp\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.140677 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4v9kf\" (UID: \"46b1ccad-6b29-4eb2-99ec-8638ef5b269a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.232337 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:23 crc kubenswrapper[4827]: I0126 09:12:23.826105 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4v9kf"]
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.044652 4827 generic.go:334] "Generic (PLEG): container finished" podID="9381f7b0-db74-4848-b768-ee1071501178" containerID="952bff6e1e17a00c077142f72f96ab549e309604c8a8799655670db0aa528b5e" exitCode=0
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.044716 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md9m5" event={"ID":"9381f7b0-db74-4848-b768-ee1071501178","Type":"ContainerDied","Data":"952bff6e1e17a00c077142f72f96ab549e309604c8a8799655670db0aa528b5e"}
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.050139 4827 generic.go:334] "Generic (PLEG): container finished" podID="e3295c9e-728c-4747-ab65-ee52cd048562" containerID="e02544b7e63aad2684b999c82db0f3866bbdfe238864397683f6c43d1059bff8" exitCode=0
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.050257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwnv" event={"ID":"e3295c9e-728c-4747-ab65-ee52cd048562","Type":"ContainerDied","Data":"e02544b7e63aad2684b999c82db0f3866bbdfe238864397683f6c43d1059bff8"}
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.053181 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf" event={"ID":"46b1ccad-6b29-4eb2-99ec-8638ef5b269a","Type":"ContainerStarted","Data":"5cda69f2a50811659fb6ac9c0059598439821b5005cf3b3c08a854b8cf2d0792"}
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.053300 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf"
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.053383 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf" event={"ID":"46b1ccad-6b29-4eb2-99ec-8638ef5b269a","Type":"ContainerStarted","Data":"e4e398bfe09b137a88dfe68b2b3350eafecc3304ce5aeab584335b2ee42147ad"}
Jan 26 09:12:24 crc kubenswrapper[4827]: I0126 09:12:24.113539 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf" podStartSLOduration=2.113516153 podStartE2EDuration="2.113516153s" podCreationTimestamp="2026-01-26 09:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:12:24.110155202 +0000 UTC m=+372.758827021" watchObservedRunningTime="2026-01-26 09:12:24.113516153 +0000 UTC m=+372.762187972"
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.059371 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md9m5" event={"ID":"9381f7b0-db74-4848-b768-ee1071501178","Type":"ContainerStarted","Data":"1eda5cf4fcdc0e8e813682cb8eb04d079a8db5700714f55f55c2123a53f545be"}
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.064294 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwnv" event={"ID":"e3295c9e-728c-4747-ab65-ee52cd048562","Type":"ContainerStarted","Data":"c15430aa3791238cedbb1c7c7a2c4cec7820a7f8fec47d467308b859cb7a66c9"}
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.120503 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-md9m5" podStartSLOduration=2.461230115 podStartE2EDuration="5.120487049s" podCreationTimestamp="2026-01-26 09:12:20 +0000 UTC" firstStartedPulling="2026-01-26 09:12:22.022857295 +0000 UTC m=+370.671529114" lastFinishedPulling="2026-01-26 09:12:24.682114229 +0000 UTC m=+373.330786048" observedRunningTime="2026-01-26 09:12:25.101511794 +0000 UTC m=+373.750183613" watchObservedRunningTime="2026-01-26 09:12:25.120487049 +0000 UTC m=+373.769158868"
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.523371 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jcwnv" podStartSLOduration=3.012546239 podStartE2EDuration="5.523349935s" podCreationTimestamp="2026-01-26 09:12:20 +0000 UTC" firstStartedPulling="2026-01-26 09:12:22.026365772 +0000 UTC m=+370.675037591" lastFinishedPulling="2026-01-26 09:12:24.537169468 +0000 UTC m=+373.185841287" observedRunningTime="2026-01-26 09:12:25.11752207 +0000 UTC m=+373.766193889" watchObservedRunningTime="2026-01-26 09:12:25.523349935 +0000 UTC m=+374.172021754"
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.527388 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"]
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.527612 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" podUID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" containerName="controller-manager" containerID="cri-o://6eb461c3ada568333e70cf97b1e56e077601a6b5a2bb7d02093e688f57805fa5" gracePeriod=30
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.537083 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"]
Jan 26 09:12:25 crc kubenswrapper[4827]: I0126 09:12:25.537296 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" podUID="8567781c-9f3a-4de8-a74a-fca92c156fc8" containerName="route-controller-manager" containerID="cri-o://268827b1e9e4e98935a4dd1348e6efac7f237877b215522d07cb5105d0fe0a3f" gracePeriod=30
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.071954 4827 generic.go:334] "Generic (PLEG): container finished" podID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" containerID="6eb461c3ada568333e70cf97b1e56e077601a6b5a2bb7d02093e688f57805fa5" exitCode=0
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.072034 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" event={"ID":"223c2179-2f20-4d48-83b8-4753a9f9d2c1","Type":"ContainerDied","Data":"6eb461c3ada568333e70cf97b1e56e077601a6b5a2bb7d02093e688f57805fa5"}
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.075116 4827 generic.go:334] "Generic (PLEG): container finished" podID="8567781c-9f3a-4de8-a74a-fca92c156fc8" containerID="268827b1e9e4e98935a4dd1348e6efac7f237877b215522d07cb5105d0fe0a3f" exitCode=0
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.075304 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" event={"ID":"8567781c-9f3a-4de8-a74a-fca92c156fc8","Type":"ContainerDied","Data":"268827b1e9e4e98935a4dd1348e6efac7f237877b215522d07cb5105d0fe0a3f"}
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.695825 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.733871 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"]
Jan 26 09:12:26 crc kubenswrapper[4827]: E0126 09:12:26.734093 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" containerName="controller-manager"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.734112 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" containerName="controller-manager"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.734232 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" containerName="controller-manager"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.739915 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.744949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config\") pod \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.744997 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw66x\" (UniqueName: \"kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x\") pod \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.745052 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles\") pod \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.745127 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert\") pod \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.745150 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca\") pod \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\" (UID: \"223c2179-2f20-4d48-83b8-4753a9f9d2c1\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.747310 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "223c2179-2f20-4d48-83b8-4753a9f9d2c1" (UID: "223c2179-2f20-4d48-83b8-4753a9f9d2c1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.747583 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config" (OuterVolumeSpecName: "config") pod "223c2179-2f20-4d48-83b8-4753a9f9d2c1" (UID: "223c2179-2f20-4d48-83b8-4753a9f9d2c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.747653 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "223c2179-2f20-4d48-83b8-4753a9f9d2c1" (UID: "223c2179-2f20-4d48-83b8-4753a9f9d2c1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.750875 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"]
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.763327 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "223c2179-2f20-4d48-83b8-4753a9f9d2c1" (UID: "223c2179-2f20-4d48-83b8-4753a9f9d2c1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.765818 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x" (OuterVolumeSpecName: "kube-api-access-cw66x") pod "223c2179-2f20-4d48-83b8-4753a9f9d2c1" (UID: "223c2179-2f20-4d48-83b8-4753a9f9d2c1"). InnerVolumeSpecName "kube-api-access-cw66x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847172 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64139030-bd3f-448d-9ef3-93178dca0eff-serving-cert\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847297 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-proxy-ca-bundles\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847337 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-client-ca\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-config\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6xw\" (UniqueName: \"kubernetes.io/projected/64139030-bd3f-448d-9ef3-93178dca0eff-kube-api-access-qt6xw\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847518 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847540 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw66x\" (UniqueName: \"kubernetes.io/projected/223c2179-2f20-4d48-83b8-4753a9f9d2c1-kube-api-access-cw66x\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847555 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847566 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223c2179-2f20-4d48-83b8-4753a9f9d2c1-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.847578 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223c2179-2f20-4d48-83b8-4753a9f9d2c1-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.870048 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.948382 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca\") pod \"8567781c-9f3a-4de8-a74a-fca92c156fc8\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.948489 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff77n\" (UniqueName: \"kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n\") pod \"8567781c-9f3a-4de8-a74a-fca92c156fc8\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949022 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config\") pod \"8567781c-9f3a-4de8-a74a-fca92c156fc8\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949073 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert\") pod \"8567781c-9f3a-4de8-a74a-fca92c156fc8\" (UID: \"8567781c-9f3a-4de8-a74a-fca92c156fc8\") "
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949107 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8567781c-9f3a-4de8-a74a-fca92c156fc8" (UID: "8567781c-9f3a-4de8-a74a-fca92c156fc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949241 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config" (OuterVolumeSpecName: "config") pod "8567781c-9f3a-4de8-a74a-fca92c156fc8" (UID: "8567781c-9f3a-4de8-a74a-fca92c156fc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949241 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-client-ca\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949357 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-config\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949415 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt6xw\" (UniqueName: \"kubernetes.io/projected/64139030-bd3f-448d-9ef3-93178dca0eff-kube-api-access-qt6xw\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949486 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64139030-bd3f-448d-9ef3-93178dca0eff-serving-cert\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949543 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-proxy-ca-bundles\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949592 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949605 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8567781c-9f3a-4de8-a74a-fca92c156fc8-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.949995 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-client-ca\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.950387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-config\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.950457 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64139030-bd3f-448d-9ef3-93178dca0eff-proxy-ca-bundles\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.953519 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n" (OuterVolumeSpecName: "kube-api-access-ff77n") pod "8567781c-9f3a-4de8-a74a-fca92c156fc8" (UID: "8567781c-9f3a-4de8-a74a-fca92c156fc8"). InnerVolumeSpecName "kube-api-access-ff77n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.953747 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8567781c-9f3a-4de8-a74a-fca92c156fc8" (UID: "8567781c-9f3a-4de8-a74a-fca92c156fc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.953953 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64139030-bd3f-448d-9ef3-93178dca0eff-serving-cert\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:26 crc kubenswrapper[4827]: I0126 09:12:26.967249 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt6xw\" (UniqueName: \"kubernetes.io/projected/64139030-bd3f-448d-9ef3-93178dca0eff-kube-api-access-qt6xw\") pod \"controller-manager-6cbf65cdf4-lw6sr\" (UID: \"64139030-bd3f-448d-9ef3-93178dca0eff\") " pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"
Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.051248 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff77n\" (UniqueName: \"kubernetes.io/projected/8567781c-9f3a-4de8-a74a-fca92c156fc8-kube-api-access-ff77n\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.051453 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8567781c-9f3a-4de8-a74a-fca92c156fc8-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.088968 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.089040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7tp5j" event={"ID":"223c2179-2f20-4d48-83b8-4753a9f9d2c1","Type":"ContainerDied","Data":"4b11a4f5ff6c7b229a6fb235c8836ec5b28c392417cefc6be67389324d8e9430"} Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.089389 4827 scope.go:117] "RemoveContainer" containerID="6eb461c3ada568333e70cf97b1e56e077601a6b5a2bb7d02093e688f57805fa5" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.098549 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.100315 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" event={"ID":"8567781c-9f3a-4de8-a74a-fca92c156fc8","Type":"ContainerDied","Data":"2ae17c8c4e6f0daad7c176e1f89ce85bc5a91e760a7f032406a14d4a437c0992"} Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.100358 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.126806 4827 scope.go:117] "RemoveContainer" containerID="268827b1e9e4e98935a4dd1348e6efac7f237877b215522d07cb5105d0fe0a3f" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.128390 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"] Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.132784 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7tp5j"] Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.148684 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"] Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.153378 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-rwflp"] Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.617433 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr"] Jan 26 09:12:27 crc kubenswrapper[4827]: W0126 09:12:27.621211 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64139030_bd3f_448d_9ef3_93178dca0eff.slice/crio-5dadf24b2c139d0d5417219f99227db0dcfeef9de35f78736daa166702f1123b WatchSource:0}: Error finding container 5dadf24b2c139d0d5417219f99227db0dcfeef9de35f78736daa166702f1123b: Status 404 returned error can't find the container with id 5dadf24b2c139d0d5417219f99227db0dcfeef9de35f78736daa166702f1123b Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.715526 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223c2179-2f20-4d48-83b8-4753a9f9d2c1" 
path="/var/lib/kubelet/pods/223c2179-2f20-4d48-83b8-4753a9f9d2c1/volumes" Jan 26 09:12:27 crc kubenswrapper[4827]: I0126 09:12:27.716307 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8567781c-9f3a-4de8-a74a-fca92c156fc8" path="/var/lib/kubelet/pods/8567781c-9f3a-4de8-a74a-fca92c156fc8/volumes" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.106941 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" event={"ID":"64139030-bd3f-448d-9ef3-93178dca0eff","Type":"ContainerStarted","Data":"5dadf24b2c139d0d5417219f99227db0dcfeef9de35f78736daa166702f1123b"} Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.561306 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.561785 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.611184 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.773932 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.774778 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.823978 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f"] Jan 26 09:12:28 crc kubenswrapper[4827]: E0126 09:12:28.824180 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8567781c-9f3a-4de8-a74a-fca92c156fc8" 
containerName="route-controller-manager" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.824191 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8567781c-9f3a-4de8-a74a-fca92c156fc8" containerName="route-controller-manager" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.824281 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8567781c-9f3a-4de8-a74a-fca92c156fc8" containerName="route-controller-manager" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.824599 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.826541 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.828762 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.828827 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.829050 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.829345 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.830991 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.841454 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f"] Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.879998 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6704ee0-edf4-4461-8215-d98f274f58b5-serving-cert\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.880041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjgf\" (UniqueName: \"kubernetes.io/projected/d6704ee0-edf4-4461-8215-d98f274f58b5-kube-api-access-mfjgf\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.880122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-client-ca\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.880234 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-config\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.981373 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-config\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.981439 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6704ee0-edf4-4461-8215-d98f274f58b5-serving-cert\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.981467 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfjgf\" (UniqueName: \"kubernetes.io/projected/d6704ee0-edf4-4461-8215-d98f274f58b5-kube-api-access-mfjgf\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.981555 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-client-ca\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.982814 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-client-ca\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " 
pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.982998 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6704ee0-edf4-4461-8215-d98f274f58b5-config\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:28 crc kubenswrapper[4827]: I0126 09:12:28.988105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6704ee0-edf4-4461-8215-d98f274f58b5-serving-cert\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.007191 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfjgf\" (UniqueName: \"kubernetes.io/projected/d6704ee0-edf4-4461-8215-d98f274f58b5-kube-api-access-mfjgf\") pod \"route-controller-manager-667d4cd98f-fcz9f\" (UID: \"d6704ee0-edf4-4461-8215-d98f274f58b5\") " pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.117174 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" event={"ID":"64139030-bd3f-448d-9ef3-93178dca0eff","Type":"ContainerStarted","Data":"9148d405b0a7ef24bcfe216bf382c42370b56a819c29bad08e5391341d002173"} Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.144885 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.171969 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fngdm" Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.197224 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" podStartSLOduration=4.197208545 podStartE2EDuration="4.197208545s" podCreationTimestamp="2026-01-26 09:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:12:29.140122726 +0000 UTC m=+377.788794545" watchObservedRunningTime="2026-01-26 09:12:29.197208545 +0000 UTC m=+377.845880364" Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.411897 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f"] Jan 26 09:12:29 crc kubenswrapper[4827]: W0126 09:12:29.418668 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6704ee0_edf4_4461_8215_d98f274f58b5.slice/crio-d474531be37495e95162059fe8a61a139f539f48d9d5a66e6e8e5580c4ad9047 WatchSource:0}: Error finding container d474531be37495e95162059fe8a61a139f539f48d9d5a66e6e8e5580c4ad9047: Status 404 returned error can't find the container with id d474531be37495e95162059fe8a61a139f539f48d9d5a66e6e8e5580c4ad9047 Jan 26 09:12:29 crc kubenswrapper[4827]: I0126 09:12:29.824667 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9f9b" podUID="ef155af2-e9c9-45d6-8ea9-19ca71f455d1" containerName="registry-server" probeResult="failure" output=< Jan 26 09:12:29 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" 
within 1s Jan 26 09:12:29 crc kubenswrapper[4827]: > Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.123029 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" event={"ID":"d6704ee0-edf4-4461-8215-d98f274f58b5","Type":"ContainerStarted","Data":"baa5e87df83cf9404fa5736e30d2d1e7eceaeafe4abce0412ffcc7c9e116ff6f"} Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.123403 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" event={"ID":"d6704ee0-edf4-4461-8215-d98f274f58b5","Type":"ContainerStarted","Data":"d474531be37495e95162059fe8a61a139f539f48d9d5a66e6e8e5580c4ad9047"} Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.124010 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.128234 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cbf65cdf4-lw6sr" Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.149916 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" podStartSLOduration=5.149892778 podStartE2EDuration="5.149892778s" podCreationTimestamp="2026-01-26 09:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:12:30.141745241 +0000 UTC m=+378.790417060" watchObservedRunningTime="2026-01-26 09:12:30.149892778 +0000 UTC m=+378.798564597" Jan 26 09:12:30 crc kubenswrapper[4827]: I0126 09:12:30.974581 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:30 crc 
kubenswrapper[4827]: I0126 09:12:30.974743 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.025867 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.159833 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.160691 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.169738 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.174840 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-667d4cd98f-fcz9f" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.225664 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:31 crc kubenswrapper[4827]: I0126 09:12:31.278661 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jcwnv" Jan 26 09:12:32 crc kubenswrapper[4827]: I0126 09:12:32.222573 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-md9m5" Jan 26 09:12:38 crc kubenswrapper[4827]: I0126 09:12:38.824247 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:38 crc kubenswrapper[4827]: I0126 09:12:38.867541 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9f9b" Jan 26 09:12:42 crc kubenswrapper[4827]: I0126 09:12:42.268343 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:12:42 crc kubenswrapper[4827]: I0126 09:12:42.269758 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:12:43 crc kubenswrapper[4827]: I0126 09:12:43.238076 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4v9kf" Jan 26 09:12:43 crc kubenswrapper[4827]: I0126 09:12:43.295722 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"] Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.336209 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" podUID="73eaaf34-a59b-4525-8a07-bd177f7b0995" containerName="registry" containerID="cri-o://10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc" gracePeriod=30 Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.688361 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821520 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821610 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821684 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stl6s\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821769 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821931 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.821959 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.822003 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token\") pod \"73eaaf34-a59b-4525-8a07-bd177f7b0995\" (UID: \"73eaaf34-a59b-4525-8a07-bd177f7b0995\") " Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.822601 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.823263 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.830934 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.832829 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.833203 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.833275 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s" (OuterVolumeSpecName: "kube-api-access-stl6s") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "kube-api-access-stl6s". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.837796 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.841215 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "73eaaf34-a59b-4525-8a07-bd177f7b0995" (UID: "73eaaf34-a59b-4525-8a07-bd177f7b0995"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923336 4827 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923376 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923390 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stl6s\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-kube-api-access-stl6s\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923403 4827 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eaaf34-a59b-4525-8a07-bd177f7b0995-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923417 4827 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eaaf34-a59b-4525-8a07-bd177f7b0995-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923430 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eaaf34-a59b-4525-8a07-bd177f7b0995-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:08 crc kubenswrapper[4827]: I0126 09:13:08.923440 4827 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eaaf34-a59b-4525-8a07-bd177f7b0995-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.424902 4827 generic.go:334] "Generic (PLEG): container finished" podID="73eaaf34-a59b-4525-8a07-bd177f7b0995" containerID="10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc" exitCode=0
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.424950 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" event={"ID":"73eaaf34-a59b-4525-8a07-bd177f7b0995","Type":"ContainerDied","Data":"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"}
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.424981 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw" event={"ID":"73eaaf34-a59b-4525-8a07-bd177f7b0995","Type":"ContainerDied","Data":"14baccd4ae15c62eaa653d888b6ce3824f2c3137c2c94a7c5571516258ad4eb3"}
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.425016 4827 scope.go:117] "RemoveContainer" containerID="10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.425138 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll4jw"
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.441815 4827 scope.go:117] "RemoveContainer" containerID="10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"
Jan 26 09:13:09 crc kubenswrapper[4827]: E0126 09:13:09.442862 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc\": container with ID starting with 10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc not found: ID does not exist" containerID="10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.442902 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc"} err="failed to get container status \"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc\": rpc error: code = NotFound desc = could not find container \"10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc\": container with ID starting with 10904c405e4f3d37e0a63d1845744655e082f763e2b8f3bc364c3f1e99d72bcc not found: ID does not exist"
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.457108 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"]
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.461431 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll4jw"]
Jan 26 09:13:09 crc kubenswrapper[4827]: I0126 09:13:09.709070 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73eaaf34-a59b-4525-8a07-bd177f7b0995" path="/var/lib/kubelet/pods/73eaaf34-a59b-4525-8a07-bd177f7b0995/volumes"
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.268149 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.268467 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.268505 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.269020 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.269118 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1" gracePeriod=600
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.448261 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1" exitCode=0
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.448337 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1"}
Jan 26 09:13:12 crc kubenswrapper[4827]: I0126 09:13:12.448641 4827 scope.go:117] "RemoveContainer" containerID="6382fd01e4b09b61f69ea88da6e87f1ca6fa68b5a5d0651ca76ba0fdc2f20094"
Jan 26 09:13:13 crc kubenswrapper[4827]: I0126 09:13:13.456368 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784"}
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.175669 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"]
Jan 26 09:15:00 crc kubenswrapper[4827]: E0126 09:15:00.176428 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73eaaf34-a59b-4525-8a07-bd177f7b0995" containerName="registry"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.176443 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="73eaaf34-a59b-4525-8a07-bd177f7b0995" containerName="registry"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.176554 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="73eaaf34-a59b-4525-8a07-bd177f7b0995" containerName="registry"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.176993 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.178676 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.178676 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.187064 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"]
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.224049 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.224102 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pb6\" (UniqueName: \"kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.224124 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.325785 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.325906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pb6\" (UniqueName: \"kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.325960 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.327904 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.333821 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.347618 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pb6\" (UniqueName: \"kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6\") pod \"collect-profiles-29490315-mx6jf\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.495113 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:00 crc kubenswrapper[4827]: I0126 09:15:00.662309 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"]
Jan 26 09:15:01 crc kubenswrapper[4827]: I0126 09:15:01.024982 4827 generic.go:334] "Generic (PLEG): container finished" podID="971c12d9-feea-474a-b3ad-58abe6658989" containerID="d1c0efa28af5c33ab132193bfb06c4d4e49c2c555a10d617a752996594dabc59" exitCode=0
Jan 26 09:15:01 crc kubenswrapper[4827]: I0126 09:15:01.025052 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf" event={"ID":"971c12d9-feea-474a-b3ad-58abe6658989","Type":"ContainerDied","Data":"d1c0efa28af5c33ab132193bfb06c4d4e49c2c555a10d617a752996594dabc59"}
Jan 26 09:15:01 crc kubenswrapper[4827]: I0126 09:15:01.025105 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf" event={"ID":"971c12d9-feea-474a-b3ad-58abe6658989","Type":"ContainerStarted","Data":"8e4fab09319112289155502e7fcfa693968f5fb19497df1de939647fa32df721"}
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.274511 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.456565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7pb6\" (UniqueName: \"kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6\") pod \"971c12d9-feea-474a-b3ad-58abe6658989\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") "
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.457005 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume\") pod \"971c12d9-feea-474a-b3ad-58abe6658989\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") "
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.457056 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume\") pod \"971c12d9-feea-474a-b3ad-58abe6658989\" (UID: \"971c12d9-feea-474a-b3ad-58abe6658989\") "
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.457825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume" (OuterVolumeSpecName: "config-volume") pod "971c12d9-feea-474a-b3ad-58abe6658989" (UID: "971c12d9-feea-474a-b3ad-58abe6658989"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.461457 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6" (OuterVolumeSpecName: "kube-api-access-z7pb6") pod "971c12d9-feea-474a-b3ad-58abe6658989" (UID: "971c12d9-feea-474a-b3ad-58abe6658989"). InnerVolumeSpecName "kube-api-access-z7pb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.462352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "971c12d9-feea-474a-b3ad-58abe6658989" (UID: "971c12d9-feea-474a-b3ad-58abe6658989"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.558158 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7pb6\" (UniqueName: \"kubernetes.io/projected/971c12d9-feea-474a-b3ad-58abe6658989-kube-api-access-z7pb6\") on node \"crc\" DevicePath \"\""
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.558207 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/971c12d9-feea-474a-b3ad-58abe6658989-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 09:15:02 crc kubenswrapper[4827]: I0126 09:15:02.558218 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c12d9-feea-474a-b3ad-58abe6658989-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 09:15:03 crc kubenswrapper[4827]: I0126 09:15:03.037464 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf" event={"ID":"971c12d9-feea-474a-b3ad-58abe6658989","Type":"ContainerDied","Data":"8e4fab09319112289155502e7fcfa693968f5fb19497df1de939647fa32df721"}
Jan 26 09:15:03 crc kubenswrapper[4827]: I0126 09:15:03.037501 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4fab09319112289155502e7fcfa693968f5fb19497df1de939647fa32df721"
Jan 26 09:15:03 crc kubenswrapper[4827]: I0126 09:15:03.037556 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"
Jan 26 09:15:11 crc kubenswrapper[4827]: I0126 09:15:11.782326 4827 scope.go:117] "RemoveContainer" containerID="d3073134841a61f15590f5171294c8a81ef35427dc19befe4988ee1e156a99e6"
Jan 26 09:15:12 crc kubenswrapper[4827]: I0126 09:15:12.268771 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:15:12 crc kubenswrapper[4827]: I0126 09:15:12.268848 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:15:42 crc kubenswrapper[4827]: I0126 09:15:42.268400 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:15:42 crc kubenswrapper[4827]: I0126 09:15:42.269038 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:16:11 crc kubenswrapper[4827]: I0126 09:16:11.821588 4827 scope.go:117] "RemoveContainer" containerID="89bee7923d21e6f4dae4b0f2f2a75b1802ea78cad0b7a5b784346fcb278fcf2a"
Jan 26 09:16:11 crc kubenswrapper[4827]: I0126 09:16:11.843150 4827 scope.go:117] "RemoveContainer" containerID="666617bab3d2b52175add5027276b5047b0603fe7c3d61787ff23f22ad63b607"
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.268442 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.268497 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.268535 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.269144 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.269200 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784" gracePeriod=600
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.421110 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784" exitCode=0
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.421153 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784"}
Jan 26 09:16:12 crc kubenswrapper[4827]: I0126 09:16:12.421185 4827 scope.go:117] "RemoveContainer" containerID="198e29b614cfc3eaf8297bce92a89ded2e76bf469011bc9da01a6edf821e85a1"
Jan 26 09:16:13 crc kubenswrapper[4827]: I0126 09:16:13.426449 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca"}
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.457116 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-5ctth"]
Jan 26 09:17:45 crc kubenswrapper[4827]: E0126 09:17:45.458039 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c12d9-feea-474a-b3ad-58abe6658989" containerName="collect-profiles"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.458059 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c12d9-feea-474a-b3ad-58abe6658989" containerName="collect-profiles"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.458171 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c12d9-feea-474a-b3ad-58abe6658989" containerName="collect-profiles"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.458703 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5ctth"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.460902 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"]
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.461347 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.461712 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.461720 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.461768 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pcs4m"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.467245 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tgb56"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.479063 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pw552"]
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.479886 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.483422 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"]
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.490301 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ccq94"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.497198 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5ctth"]
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.502184 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pw552"]
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.546445 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtnw\" (UniqueName: \"kubernetes.io/projected/4614716e-593c-44d6-b054-f33ad6966d0b-kube-api-access-5qtnw\") pod \"cert-manager-858654f9db-5ctth\" (UID: \"4614716e-593c-44d6-b054-f33ad6966d0b\") " pod="cert-manager/cert-manager-858654f9db-5ctth"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.546515 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rbz\" (UniqueName: \"kubernetes.io/projected/3da6c3f3-a01b-4f14-9028-a7e371a518d4-kube-api-access-p2rbz\") pod \"cert-manager-webhook-687f57d79b-pw552\" (UID: \"3da6c3f3-a01b-4f14-9028-a7e371a518d4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.546669 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jghr\" (UniqueName: \"kubernetes.io/projected/287fb4fc-a4a8-4758-8c18-ea75f9590b1a-kube-api-access-7jghr\") pod \"cert-manager-cainjector-cf98fcc89-lgxgj\" (UID: \"287fb4fc-a4a8-4758-8c18-ea75f9590b1a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.648222 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rbz\" (UniqueName: \"kubernetes.io/projected/3da6c3f3-a01b-4f14-9028-a7e371a518d4-kube-api-access-p2rbz\") pod \"cert-manager-webhook-687f57d79b-pw552\" (UID: \"3da6c3f3-a01b-4f14-9028-a7e371a518d4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.648310 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jghr\" (UniqueName: \"kubernetes.io/projected/287fb4fc-a4a8-4758-8c18-ea75f9590b1a-kube-api-access-7jghr\") pod \"cert-manager-cainjector-cf98fcc89-lgxgj\" (UID: \"287fb4fc-a4a8-4758-8c18-ea75f9590b1a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.648378 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qtnw\" (UniqueName: \"kubernetes.io/projected/4614716e-593c-44d6-b054-f33ad6966d0b-kube-api-access-5qtnw\") pod \"cert-manager-858654f9db-5ctth\" (UID: \"4614716e-593c-44d6-b054-f33ad6966d0b\") " pod="cert-manager/cert-manager-858654f9db-5ctth"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.668196 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rbz\" (UniqueName: \"kubernetes.io/projected/3da6c3f3-a01b-4f14-9028-a7e371a518d4-kube-api-access-p2rbz\") pod \"cert-manager-webhook-687f57d79b-pw552\" (UID: \"3da6c3f3-a01b-4f14-9028-a7e371a518d4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.673101 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qtnw\" (UniqueName: \"kubernetes.io/projected/4614716e-593c-44d6-b054-f33ad6966d0b-kube-api-access-5qtnw\") pod \"cert-manager-858654f9db-5ctth\" (UID: \"4614716e-593c-44d6-b054-f33ad6966d0b\") " pod="cert-manager/cert-manager-858654f9db-5ctth"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.674143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jghr\" (UniqueName: \"kubernetes.io/projected/287fb4fc-a4a8-4758-8c18-ea75f9590b1a-kube-api-access-7jghr\") pod \"cert-manager-cainjector-cf98fcc89-lgxgj\" (UID: \"287fb4fc-a4a8-4758-8c18-ea75f9590b1a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.780172 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5ctth"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.796091 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"
Jan 26 09:17:45 crc kubenswrapper[4827]: I0126 09:17:45.804092 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:46 crc kubenswrapper[4827]: I0126 09:17:46.055243 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5ctth"]
Jan 26 09:17:46 crc kubenswrapper[4827]: I0126 09:17:46.062281 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 09:17:46 crc kubenswrapper[4827]: I0126 09:17:46.079165 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj"]
Jan 26 09:17:46 crc kubenswrapper[4827]: W0126 09:17:46.085380 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287fb4fc_a4a8_4758_8c18_ea75f9590b1a.slice/crio-3dbb3afc012e226ef68eb745c10e7d2b1fc1e07b4fd67d2dd8f0e0c67e5b81c0 WatchSource:0}: Error finding container 3dbb3afc012e226ef68eb745c10e7d2b1fc1e07b4fd67d2dd8f0e0c67e5b81c0: Status 404 returned error can't find the container with id 3dbb3afc012e226ef68eb745c10e7d2b1fc1e07b4fd67d2dd8f0e0c67e5b81c0
Jan 26 09:17:46 crc kubenswrapper[4827]: I0126 09:17:46.341180 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pw552"]
Jan 26 09:17:46 crc kubenswrapper[4827]: W0126 09:17:46.343605 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3da6c3f3_a01b_4f14_9028_a7e371a518d4.slice/crio-179081eb44baf1b3cf7d6c8f331e56098d73c2f1a5802574cce78ae14992e192 WatchSource:0}: Error finding container 179081eb44baf1b3cf7d6c8f331e56098d73c2f1a5802574cce78ae14992e192: Status 404 returned error can't find the container with id 179081eb44baf1b3cf7d6c8f331e56098d73c2f1a5802574cce78ae14992e192
Jan 26 09:17:46 crc kubenswrapper[4827]: I0126 09:17:46.999076 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552" event={"ID":"3da6c3f3-a01b-4f14-9028-a7e371a518d4","Type":"ContainerStarted","Data":"179081eb44baf1b3cf7d6c8f331e56098d73c2f1a5802574cce78ae14992e192"}
Jan 26 09:17:47 crc kubenswrapper[4827]: I0126 09:17:47.000458 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5ctth" event={"ID":"4614716e-593c-44d6-b054-f33ad6966d0b","Type":"ContainerStarted","Data":"c5ac9615ce51b5cc6bdd13e3e9de83486a737f34e9d4de1faaf530b6dd45ee8e"}
Jan 26 09:17:47 crc kubenswrapper[4827]: I0126 09:17:47.001671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj" event={"ID":"287fb4fc-a4a8-4758-8c18-ea75f9590b1a","Type":"ContainerStarted","Data":"3dbb3afc012e226ef68eb745c10e7d2b1fc1e07b4fd67d2dd8f0e0c67e5b81c0"}
Jan 26 09:17:49 crc kubenswrapper[4827]: I0126 09:17:49.013161 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj" event={"ID":"287fb4fc-a4a8-4758-8c18-ea75f9590b1a","Type":"ContainerStarted","Data":"3468bdc9a3df3ad6e83f8ca2388bd98a3ccebb1a8fae8f202eb3059f03601c63"}
Jan 26 09:17:49 crc kubenswrapper[4827]: I0126 09:17:49.033754 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lgxgj" podStartSLOduration=1.686339985 podStartE2EDuration="4.033670759s" podCreationTimestamp="2026-01-26 09:17:45 +0000 UTC" firstStartedPulling="2026-01-26 09:17:46.087800657 +0000 UTC m=+694.736472466" lastFinishedPulling="2026-01-26 09:17:48.435131421 +0000 UTC m=+697.083803240" observedRunningTime="2026-01-26 09:17:49.028400079 +0000 UTC m=+697.677071978" watchObservedRunningTime="2026-01-26 09:17:49.033670759 +0000 UTC m=+697.682342588"
Jan 26 09:17:50 crc kubenswrapper[4827]: I0126 09:17:50.019438 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5ctth" event={"ID":"4614716e-593c-44d6-b054-f33ad6966d0b","Type":"ContainerStarted","Data":"a56a31bb62d878c0543cb28dea84d792a7ededec8ba9997d16d67d3d65b0b2ed"}
Jan 26 09:17:50 crc kubenswrapper[4827]: I0126 09:17:50.022287 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552" event={"ID":"3da6c3f3-a01b-4f14-9028-a7e371a518d4","Type":"ContainerStarted","Data":"0a8c9ab9ffafdebd230e0689f314d0d06997594217b6e22c83b1590f5d993c10"}
Jan 26 09:17:50 crc kubenswrapper[4827]: I0126 09:17:50.022328 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552"
Jan 26 09:17:50 crc kubenswrapper[4827]: I0126 09:17:50.039561 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-5ctth" podStartSLOduration=1.329831588 podStartE2EDuration="5.03954484s" podCreationTimestamp="2026-01-26 09:17:45 +0000 UTC" firstStartedPulling="2026-01-26 09:17:46.062004473 +0000 UTC m=+694.710676292" lastFinishedPulling="2026-01-26 09:17:49.771717725 +0000 UTC m=+698.420389544" observedRunningTime="2026-01-26 09:17:50.033213413 +0000 UTC m=+698.681885232" watchObservedRunningTime="2026-01-26 09:17:50.03954484 +0000 UTC m=+698.688216659"
Jan 26 09:17:50 crc kubenswrapper[4827]: I0126 09:17:50.100567 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552" podStartSLOduration=1.6687565009999998 podStartE2EDuration="5.100542017s" podCreationTimestamp="2026-01-26 09:17:45 +0000 UTC" firstStartedPulling="2026-01-26 09:17:46.345970938 +0000 UTC m=+694.994642757" lastFinishedPulling="2026-01-26 09:17:49.777756454 +0000 UTC m=+698.426428273" observedRunningTime="2026-01-26 09:17:50.09648929 +0000 UTC m=+698.745161129" watchObservedRunningTime="2026-01-26 09:17:50.100542017 +0000 UTC m=+698.749213836"
Jan 26 09:17:55 crc kubenswrapper[4827]: I0126 09:17:55.808385 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-pw552" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.384919 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9xkm"] Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.385975 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-controller" containerID="cri-o://27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386198 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="sbdb" containerID="cri-o://3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386236 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386303 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="nbdb" containerID="cri-o://2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386318 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" 
containerName="kube-rbac-proxy-node" containerID="cri-o://31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386350 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="northd" containerID="cri-o://03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.386452 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-acl-logging" containerID="cri-o://dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.419049 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" containerID="cri-o://4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" gracePeriod=30 Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.738378 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/3.log" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.740801 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovn-acl-logging/0.log" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.741331 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovn-controller/0.log" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.741808 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808028 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808083 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808109 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808125 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808148 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808169 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808195 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gss4q\" (UniqueName: \"kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808215 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808248 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808269 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808289 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: 
\"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808309 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808323 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808347 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808358 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808372 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808391 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808409 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808424 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.808452 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert\") pod \"3ba16376-c20a-411b-b45a-d7e718fbbac0\" (UID: \"3ba16376-c20a-411b-b45a-d7e718fbbac0\") " Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809211 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809291 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809318 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket" (OuterVolumeSpecName: "log-socket") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809344 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log" (OuterVolumeSpecName: "node-log") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809367 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809390 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809461 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809488 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash" (OuterVolumeSpecName: "host-slash") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809545 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809611 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809685 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809712 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809741 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809741 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.809760 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.810003 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.810178 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.814274 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.814280 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q" (OuterVolumeSpecName: "kube-api-access-gss4q") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "kube-api-access-gss4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.821957 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3ba16376-c20a-411b-b45a-d7e718fbbac0" (UID: "3ba16376-c20a-411b-b45a-d7e718fbbac0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.843239 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lj7pn"] Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.843627 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kubecfg-setup" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.843717 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kubecfg-setup" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.843767 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.843831 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.843885 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.843932 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.843979 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="northd" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844035 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="northd" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844086 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" 
containerName="sbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844132 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="sbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844185 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844232 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844280 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-node" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844325 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-node" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844373 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="nbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844418 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="nbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844475 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-acl-logging" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844523 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-acl-logging" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844569 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 
09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844616 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844680 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844727 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.844783 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844830 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.844946 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-node" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845006 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845059 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845105 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="nbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845152 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="northd" 
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845202 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845250 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845295 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovn-acl-logging" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845358 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="sbdb" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845412 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: E0126 09:18:05.845563 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845656 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845815 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.845870 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerName="ovnkube-controller" Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.862204 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909224 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7cr\" (UniqueName: \"kubernetes.io/projected/777295a4-0611-42eb-be36-2cb975d1d29d-kube-api-access-th7cr\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-script-lib\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909320 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-slash\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909346 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-env-overrides\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909364 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-systemd-units\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909383 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/777295a4-0611-42eb-be36-2cb975d1d29d-ovn-node-metrics-cert\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909401 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909418 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-netns\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909438 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-kubelet\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909470 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-etc-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909492 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-node-log\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909509 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-ovn\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909524 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-netd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909540 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-config\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909579 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-var-lib-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-log-socket\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909618 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-systemd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909690 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-bin\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909718 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909776 4827 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909787 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909796 4827 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909805 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909832 4827 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909841 4827 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909850 4827 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-log-socket\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909861 4827 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909933 4827 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909967 4827 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.909987 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gss4q\" (UniqueName: \"kubernetes.io/projected/3ba16376-c20a-411b-b45a-d7e718fbbac0-kube-api-access-gss4q\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910002 4827 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910011 4827 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910019 4827 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910028 4827 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910035 4827 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910043 4827 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910051 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910059 4827 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3ba16376-c20a-411b-b45a-d7e718fbbac0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:05 crc kubenswrapper[4827]: I0126 09:18:05.910067 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ba16376-c20a-411b-b45a-d7e718fbbac0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.011481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-systemd-units\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.011979 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/777295a4-0611-42eb-be36-2cb975d1d29d-ovn-node-metrics-cert\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.012898 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.012957 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-netns\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.012984 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-kubelet\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.011914 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-systemd-units\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013046 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-etc-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013004 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-etc-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-kubelet\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013094 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-node-log\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013096 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-netns\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013127 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-node-log\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013151 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-ovn\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-netd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013202 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-config\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013223 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-ovn\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013247 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-netd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-var-lib-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013430 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-log-socket\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-systemd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013492 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-bin\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013523 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-log-socket\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013558 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013562 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-run-systemd\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013599 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7cr\" (UniqueName: \"kubernetes.io/projected/777295a4-0611-42eb-be36-2cb975d1d29d-kube-api-access-th7cr\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013601 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-cni-bin\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013621 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-script-lib\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013661 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-slash\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-env-overrides\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013727 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013487 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-var-lib-openvswitch\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013833 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/777295a4-0611-42eb-be36-2cb975d1d29d-host-slash\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.013993 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-config\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.014385 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-env-overrides\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.014706 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/777295a4-0611-42eb-be36-2cb975d1d29d-ovnkube-script-lib\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.018511 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/777295a4-0611-42eb-be36-2cb975d1d29d-ovn-node-metrics-cert\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.035177 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7cr\" (UniqueName: \"kubernetes.io/projected/777295a4-0611-42eb-be36-2cb975d1d29d-kube-api-access-th7cr\") pod \"ovnkube-node-lj7pn\" (UID: \"777295a4-0611-42eb-be36-2cb975d1d29d\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.126538 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovnkube-controller/3.log"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.130825 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovn-acl-logging/0.log"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.131734 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9xkm_3ba16376-c20a-411b-b45a-d7e718fbbac0/ovn-controller/0.log"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132274 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132323 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132347 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132367 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132385 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132404 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" exitCode=0
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132424 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" exitCode=143
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132446 4827 generic.go:334] "Generic (PLEG): container finished" podID="3ba16376-c20a-411b-b45a-d7e718fbbac0" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" exitCode=143
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132450 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm"
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132548 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132599 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132631 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132693 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132720 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132745 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132768 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132789 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132804 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132818 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132832 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132846 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132861 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132875 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132889 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132930 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132946 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132960 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132974 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.132988 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133003 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133017 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133030 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133044 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133058 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133079 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133101 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"}
Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133117 4827
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133131 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133148 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133162 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133175 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133190 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133204 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133217 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133230 4827 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133248 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9xkm" event={"ID":"3ba16376-c20a-411b-b45a-d7e718fbbac0","Type":"ContainerDied","Data":"732af990ba015c7db066924bb7b311eddcd6fea77089bb0880ce342ded80684e"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133268 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133283 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133296 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133309 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133323 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133336 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} Jan 26 
09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133349 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133362 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133375 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133387 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.133415 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.137299 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/2.log" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.138783 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/1.log" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.138928 4827 generic.go:334] "Generic (PLEG): container finished" podID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" containerID="1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d" exitCode=2 Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.139007 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerDied","Data":"1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.139043 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a"} Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.140088 4827 scope.go:117] "RemoveContainer" containerID="1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.140662 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v7qpk_openshift-multus(e83a7bed-4909-4830-89e5-13c9a0bfcaf6)\"" pod="openshift-multus/multus-v7qpk" podUID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.176215 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.197087 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.202609 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9xkm"] Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.207015 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9xkm"] Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.213247 4827 scope.go:117] "RemoveContainer" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.234155 4827 scope.go:117] "RemoveContainer" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: W0126 09:18:06.243724 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod777295a4_0611_42eb_be36_2cb975d1d29d.slice/crio-96ea0b32d9483319e78a347e2c1c178c7c41ef9269d2f342b58a36ee278bef14 WatchSource:0}: Error finding container 96ea0b32d9483319e78a347e2c1c178c7c41ef9269d2f342b58a36ee278bef14: Status 404 returned error can't find the container with id 96ea0b32d9483319e78a347e2c1c178c7c41ef9269d2f342b58a36ee278bef14 Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.254122 4827 scope.go:117] "RemoveContainer" containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.276052 4827 scope.go:117] "RemoveContainer" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.297904 4827 scope.go:117] "RemoveContainer" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.316997 4827 scope.go:117] "RemoveContainer" 
containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.334327 4827 scope.go:117] "RemoveContainer" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.359766 4827 scope.go:117] "RemoveContainer" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.380800 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.382354 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.382396 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} err="failed to get container status \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.382423 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.382697 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": container with ID starting with 334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa not found: ID does not exist" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.382727 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} err="failed to get container status \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": rpc error: code = NotFound desc = could not find container \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": container with ID starting with 334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.382746 4827 scope.go:117] "RemoveContainer" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.382996 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": container with ID starting with 3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819 not found: ID does not exist" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.383027 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} err="failed to get container status \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": rpc error: code = NotFound desc = could not find container 
\"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": container with ID starting with 3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.383057 4827 scope.go:117] "RemoveContainer" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.383314 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": container with ID starting with 2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2 not found: ID does not exist" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.383349 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} err="failed to get container status \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": rpc error: code = NotFound desc = could not find container \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": container with ID starting with 2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.383374 4827 scope.go:117] "RemoveContainer" containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.384815 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": container with ID starting with 03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574 not found: ID does not exist" 
containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.384855 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} err="failed to get container status \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": rpc error: code = NotFound desc = could not find container \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": container with ID starting with 03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.384883 4827 scope.go:117] "RemoveContainer" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.385099 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": container with ID starting with 6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e not found: ID does not exist" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385137 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} err="failed to get container status \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": rpc error: code = NotFound desc = could not find container \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": container with ID starting with 6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385151 4827 scope.go:117] 
"RemoveContainer" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.385315 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": container with ID starting with 31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1 not found: ID does not exist" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385339 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} err="failed to get container status \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": rpc error: code = NotFound desc = could not find container \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": container with ID starting with 31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385353 4827 scope.go:117] "RemoveContainer" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.385550 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": container with ID starting with dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de not found: ID does not exist" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385568 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} err="failed to get container status \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": rpc error: code = NotFound desc = could not find container \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": container with ID starting with dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385583 4827 scope.go:117] "RemoveContainer" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.385821 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": container with ID starting with 27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0 not found: ID does not exist" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385844 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} err="failed to get container status \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": rpc error: code = NotFound desc = could not find container \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": container with ID starting with 27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.385862 4827 scope.go:117] "RemoveContainer" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: E0126 09:18:06.386098 4827 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": container with ID starting with 5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726 not found: ID does not exist" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386124 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} err="failed to get container status \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": rpc error: code = NotFound desc = could not find container \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": container with ID starting with 5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386144 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386375 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} err="failed to get container status \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386395 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386605 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} err="failed to get container status \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": rpc error: code = NotFound desc = could not find container \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": container with ID starting with 334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386622 4827 scope.go:117] "RemoveContainer" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386847 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} err="failed to get container status \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": rpc error: code = NotFound desc = could not find container \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": container with ID starting with 3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.386874 4827 scope.go:117] "RemoveContainer" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387037 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} err="failed to get container status \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": rpc error: code = NotFound desc = could not find container \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": container with ID starting with 
2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387058 4827 scope.go:117] "RemoveContainer" containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387263 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} err="failed to get container status \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": rpc error: code = NotFound desc = could not find container \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": container with ID starting with 03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387284 4827 scope.go:117] "RemoveContainer" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387475 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} err="failed to get container status \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": rpc error: code = NotFound desc = could not find container \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": container with ID starting with 6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387527 4827 scope.go:117] "RemoveContainer" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387723 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} err="failed to get container status \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": rpc error: code = NotFound desc = could not find container \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": container with ID starting with 31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387741 4827 scope.go:117] "RemoveContainer" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387893 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} err="failed to get container status \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": rpc error: code = NotFound desc = could not find container \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": container with ID starting with dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.387911 4827 scope.go:117] "RemoveContainer" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388051 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} err="failed to get container status \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": rpc error: code = NotFound desc = could not find container \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": container with ID starting with 27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0 not found: ID does not 
exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388067 4827 scope.go:117] "RemoveContainer" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388213 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} err="failed to get container status \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": rpc error: code = NotFound desc = could not find container \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": container with ID starting with 5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388229 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388363 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} err="failed to get container status \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388380 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388506 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} err="failed to get container status 
\"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": rpc error: code = NotFound desc = could not find container \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": container with ID starting with 334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388527 4827 scope.go:117] "RemoveContainer" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388784 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} err="failed to get container status \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": rpc error: code = NotFound desc = could not find container \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": container with ID starting with 3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388805 4827 scope.go:117] "RemoveContainer" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.388986 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} err="failed to get container status \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": rpc error: code = NotFound desc = could not find container \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": container with ID starting with 2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389003 4827 scope.go:117] "RemoveContainer" 
containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389144 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} err="failed to get container status \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": rpc error: code = NotFound desc = could not find container \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": container with ID starting with 03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389159 4827 scope.go:117] "RemoveContainer" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389309 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} err="failed to get container status \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": rpc error: code = NotFound desc = could not find container \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": container with ID starting with 6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389326 4827 scope.go:117] "RemoveContainer" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389456 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} err="failed to get container status \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": rpc error: code = NotFound desc = could 
not find container \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": container with ID starting with 31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389471 4827 scope.go:117] "RemoveContainer" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389669 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} err="failed to get container status \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": rpc error: code = NotFound desc = could not find container \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": container with ID starting with dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389687 4827 scope.go:117] "RemoveContainer" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389843 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} err="failed to get container status \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": rpc error: code = NotFound desc = could not find container \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": container with ID starting with 27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.389858 4827 scope.go:117] "RemoveContainer" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 
09:18:06.390011 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} err="failed to get container status \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": rpc error: code = NotFound desc = could not find container \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": container with ID starting with 5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390028 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390210 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} err="failed to get container status \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390229 4827 scope.go:117] "RemoveContainer" containerID="334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390404 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa"} err="failed to get container status \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": rpc error: code = NotFound desc = could not find container \"334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa\": container with ID starting with 
334a470007355bd8b4edd8f6ba784c68d6d735c890a2a13a9f5f299c416611aa not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390424 4827 scope.go:117] "RemoveContainer" containerID="3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390577 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819"} err="failed to get container status \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": rpc error: code = NotFound desc = could not find container \"3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819\": container with ID starting with 3085d15933b6128661139de7016163c5de189735e3f03c703d3eb0fc4fa7a819 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390595 4827 scope.go:117] "RemoveContainer" containerID="2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390810 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2"} err="failed to get container status \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": rpc error: code = NotFound desc = could not find container \"2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2\": container with ID starting with 2cccc57a87878ef590a037ee30778bc547d3397e4450217760c21bb6fbf811d2 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390833 4827 scope.go:117] "RemoveContainer" containerID="03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.390980 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574"} err="failed to get container status \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": rpc error: code = NotFound desc = could not find container \"03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574\": container with ID starting with 03e8f625ae6cdd7ff94c47c876d7a6fb50916081cca6bf29bffa81b9f33fe574 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391001 4827 scope.go:117] "RemoveContainer" containerID="6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391165 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e"} err="failed to get container status \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": rpc error: code = NotFound desc = could not find container \"6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e\": container with ID starting with 6ee8a446655b114211077f5b250908241c60202b92732b563906d49ea641d38e not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391184 4827 scope.go:117] "RemoveContainer" containerID="31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391325 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1"} err="failed to get container status \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": rpc error: code = NotFound desc = could not find container \"31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1\": container with ID starting with 31671e28f7b85177b7451e98d034c9aaec3fcd549a65fac4d952efd10aaf73d1 not found: ID does not 
exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391340 4827 scope.go:117] "RemoveContainer" containerID="dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391475 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de"} err="failed to get container status \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": rpc error: code = NotFound desc = could not find container \"dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de\": container with ID starting with dba2fd7b21495aad24a9ed7b4746db352e139bc29193bb039d3aaa1c3af9a4de not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391497 4827 scope.go:117] "RemoveContainer" containerID="27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391683 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0"} err="failed to get container status \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": rpc error: code = NotFound desc = could not find container \"27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0\": container with ID starting with 27e7116bee11088e8ca38ce0f97184a5f66f81b24f1cd6bf15eef602304e01b0 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391702 4827 scope.go:117] "RemoveContainer" containerID="5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391854 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726"} err="failed to get container status 
\"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": rpc error: code = NotFound desc = could not find container \"5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726\": container with ID starting with 5a899d565676b840a563f72ad1303586dd5e90bc13854d9421fa43b5f5558726 not found: ID does not exist" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.391879 4827 scope.go:117] "RemoveContainer" containerID="4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494" Jan 26 09:18:06 crc kubenswrapper[4827]: I0126 09:18:06.392040 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494"} err="failed to get container status \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": rpc error: code = NotFound desc = could not find container \"4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494\": container with ID starting with 4db58f802937a0ac8d03f599d774f68ae85ac82aa0c2946f3a21ab9b48e8f494 not found: ID does not exist" Jan 26 09:18:07 crc kubenswrapper[4827]: I0126 09:18:07.145074 4827 generic.go:334] "Generic (PLEG): container finished" podID="777295a4-0611-42eb-be36-2cb975d1d29d" containerID="69a39a1ee85e60db77a278bcd6a89149ee309937a2375893c0387caec12c6496" exitCode=0 Jan 26 09:18:07 crc kubenswrapper[4827]: I0126 09:18:07.145208 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerDied","Data":"69a39a1ee85e60db77a278bcd6a89149ee309937a2375893c0387caec12c6496"} Jan 26 09:18:07 crc kubenswrapper[4827]: I0126 09:18:07.145275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"96ea0b32d9483319e78a347e2c1c178c7c41ef9269d2f342b58a36ee278bef14"} Jan 26 
09:18:07 crc kubenswrapper[4827]: I0126 09:18:07.708555 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba16376-c20a-411b-b45a-d7e718fbbac0" path="/var/lib/kubelet/pods/3ba16376-c20a-411b-b45a-d7e718fbbac0/volumes" Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.155701 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"6ad7673afe3e35e08c34b9bd6617f9f6f73c3b938d49a6869fa8f3587b899a1d"} Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.156040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"39a1d80bd3505a1546a781ca1bb15cbdb06edefd7f633ba892143a3af4f642af"} Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.156055 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"07329a6f43ce98c6ab21d90c6695a2c72e14a1c90a78bcc9b40ded824a94b550"} Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.156067 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"cdfd6b243ee614e32c505e8176867604791355150ec1fa1cc09429f54a8781c5"} Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.156080 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"ad46be0e47dcb956682d5db022dfcda1f294d4d915eb6eae9a448753feb5307d"} Jan 26 09:18:08 crc kubenswrapper[4827]: I0126 09:18:08.156091 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"28f6c3da7fe4e9ccdd711bd73cdb4f5b16e6ccc8f1f289cf0fcd9b952af07992"} Jan 26 09:18:10 crc kubenswrapper[4827]: I0126 09:18:10.170380 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"bcee0223e58315abdea5aef5f844258d31644b4e1014a4135695424c13232097"} Jan 26 09:18:11 crc kubenswrapper[4827]: I0126 09:18:11.894364 4827 scope.go:117] "RemoveContainer" containerID="b5f6d30ed63bf770d0fcf3146fdc468b4a336230b55edc096d93063cf78ace1a" Jan 26 09:18:12 crc kubenswrapper[4827]: I0126 09:18:12.182491 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/2.log" Jan 26 09:18:12 crc kubenswrapper[4827]: I0126 09:18:12.269474 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:18:12 crc kubenswrapper[4827]: I0126 09:18:12.269791 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.197133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" event={"ID":"777295a4-0611-42eb-be36-2cb975d1d29d","Type":"ContainerStarted","Data":"432438e03e4bc72e16b51120bd2dde3e74b47b281311e7d1781c4c1bee97a7d4"} Jan 26 09:18:13 
crc kubenswrapper[4827]: I0126 09:18:13.198822 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.198999 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.199125 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.243203 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.246997 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:13 crc kubenswrapper[4827]: I0126 09:18:13.267946 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" podStartSLOduration=8.267922361 podStartE2EDuration="8.267922361s" podCreationTimestamp="2026-01-26 09:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:18:13.25356485 +0000 UTC m=+721.902236679" watchObservedRunningTime="2026-01-26 09:18:13.267922361 +0000 UTC m=+721.916594220" Jan 26 09:18:18 crc kubenswrapper[4827]: I0126 09:18:18.703248 4827 scope.go:117] "RemoveContainer" containerID="1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d" Jan 26 09:18:18 crc kubenswrapper[4827]: E0126 09:18:18.704334 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v7qpk_openshift-multus(e83a7bed-4909-4830-89e5-13c9a0bfcaf6)\"" 
pod="openshift-multus/multus-v7qpk" podUID="e83a7bed-4909-4830-89e5-13c9a0bfcaf6" Jan 26 09:18:32 crc kubenswrapper[4827]: I0126 09:18:32.703609 4827 scope.go:117] "RemoveContainer" containerID="1a62d8e64ac48c4def0edb2f15532c992d6cd4065df6ebacb2839c194b02b43d" Jan 26 09:18:33 crc kubenswrapper[4827]: I0126 09:18:33.315176 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v7qpk_e83a7bed-4909-4830-89e5-13c9a0bfcaf6/kube-multus/2.log" Jan 26 09:18:33 crc kubenswrapper[4827]: I0126 09:18:33.315443 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v7qpk" event={"ID":"e83a7bed-4909-4830-89e5-13c9a0bfcaf6","Type":"ContainerStarted","Data":"6265cf1bedd245cf7d7d0d85a14f0a2521873061a0a0b2a445b39123204ffcd6"} Jan 26 09:18:36 crc kubenswrapper[4827]: I0126 09:18:36.221885 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj7pn" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.653567 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68"] Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.655610 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.657876 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.671363 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68"] Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.834366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.834440 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxkfb\" (UniqueName: \"kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.834481 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: 
I0126 09:18:38.935448 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.935956 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxkfb\" (UniqueName: \"kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.936163 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.936411 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.937200 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.971281 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxkfb\" (UniqueName: \"kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:38 crc kubenswrapper[4827]: I0126 09:18:38.972336 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:39 crc kubenswrapper[4827]: I0126 09:18:39.379221 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68"] Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.356657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerStarted","Data":"341170bf98e2dc149d837171bd121685fa35189095047ee8f7b0121dff757a8a"} Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.357032 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerStarted","Data":"7d0f707bbc390e8f8ae7d0c54ea2d006a08a6a79602d7f58c0ee440b0f5ea88c"} Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.469127 4827 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.470413 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.487853 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.663901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.663960 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.664097 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqn9\" (UniqueName: \"kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.765270 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqn9\" (UniqueName: \"kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9\") pod \"redhat-operators-lgwz5\" (UID: 
\"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.765340 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.765392 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.766344 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.766482 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:40 crc kubenswrapper[4827]: I0126 09:18:40.789077 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqn9\" (UniqueName: \"kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9\") pod \"redhat-operators-lgwz5\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " 
pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:41 crc kubenswrapper[4827]: I0126 09:18:41.087207 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:41 crc kubenswrapper[4827]: I0126 09:18:41.364582 4827 generic.go:334] "Generic (PLEG): container finished" podID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerID="341170bf98e2dc149d837171bd121685fa35189095047ee8f7b0121dff757a8a" exitCode=0 Jan 26 09:18:41 crc kubenswrapper[4827]: I0126 09:18:41.364857 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerDied","Data":"341170bf98e2dc149d837171bd121685fa35189095047ee8f7b0121dff757a8a"} Jan 26 09:18:41 crc kubenswrapper[4827]: I0126 09:18:41.375102 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:42 crc kubenswrapper[4827]: I0126 09:18:42.268714 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:18:42 crc kubenswrapper[4827]: I0126 09:18:42.269064 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:18:42 crc kubenswrapper[4827]: I0126 09:18:42.369554 4827 generic.go:334] "Generic (PLEG): container finished" podID="397a82ed-23ab-4fa6-96db-e455839afc8b" 
containerID="5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9" exitCode=0 Jan 26 09:18:42 crc kubenswrapper[4827]: I0126 09:18:42.369591 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerDied","Data":"5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9"} Jan 26 09:18:42 crc kubenswrapper[4827]: I0126 09:18:42.369614 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerStarted","Data":"e1468c791128eb58f0e02eed96c19d8472e17fa7e5ce5e7aae2a45efe458ad6b"} Jan 26 09:18:43 crc kubenswrapper[4827]: I0126 09:18:43.017338 4827 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 09:18:43 crc kubenswrapper[4827]: I0126 09:18:43.377040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerStarted","Data":"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e"} Jan 26 09:18:43 crc kubenswrapper[4827]: I0126 09:18:43.378867 4827 generic.go:334] "Generic (PLEG): container finished" podID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerID="2c98f659d5f0b0fc83eaf257609b7c1913c59574a3f38fdfc563be0182d02e7a" exitCode=0 Jan 26 09:18:43 crc kubenswrapper[4827]: I0126 09:18:43.378997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerDied","Data":"2c98f659d5f0b0fc83eaf257609b7c1913c59574a3f38fdfc563be0182d02e7a"} Jan 26 09:18:44 crc kubenswrapper[4827]: I0126 09:18:44.388234 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerID="d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e" exitCode=0 Jan 26 09:18:44 crc kubenswrapper[4827]: I0126 09:18:44.388293 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerDied","Data":"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e"} Jan 26 09:18:44 crc kubenswrapper[4827]: I0126 09:18:44.390570 4827 generic.go:334] "Generic (PLEG): container finished" podID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerID="cba2422ea4e026100111366140ad9d3928ee486f96c2bb46c071ef34851939a4" exitCode=0 Jan 26 09:18:44 crc kubenswrapper[4827]: I0126 09:18:44.390598 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerDied","Data":"cba2422ea4e026100111366140ad9d3928ee486f96c2bb46c071ef34851939a4"} Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.401876 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerStarted","Data":"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada"} Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.431764 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lgwz5" podStartSLOduration=3.002527476 podStartE2EDuration="5.431741027s" podCreationTimestamp="2026-01-26 09:18:40 +0000 UTC" firstStartedPulling="2026-01-26 09:18:42.390165635 +0000 UTC m=+751.038837454" lastFinishedPulling="2026-01-26 09:18:44.819379166 +0000 UTC m=+753.468051005" observedRunningTime="2026-01-26 09:18:45.427374916 +0000 UTC m=+754.076046775" watchObservedRunningTime="2026-01-26 09:18:45.431741027 +0000 UTC 
m=+754.080412856" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.668411 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.844447 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxkfb\" (UniqueName: \"kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb\") pod \"d9cafcc1-be7a-4449-b34d-8307959c4608\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.844509 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util\") pod \"d9cafcc1-be7a-4449-b34d-8307959c4608\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.844558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle\") pod \"d9cafcc1-be7a-4449-b34d-8307959c4608\" (UID: \"d9cafcc1-be7a-4449-b34d-8307959c4608\") " Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.845109 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle" (OuterVolumeSpecName: "bundle") pod "d9cafcc1-be7a-4449-b34d-8307959c4608" (UID: "d9cafcc1-be7a-4449-b34d-8307959c4608"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.854049 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb" (OuterVolumeSpecName: "kube-api-access-fxkfb") pod "d9cafcc1-be7a-4449-b34d-8307959c4608" (UID: "d9cafcc1-be7a-4449-b34d-8307959c4608"). InnerVolumeSpecName "kube-api-access-fxkfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.863017 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util" (OuterVolumeSpecName: "util") pod "d9cafcc1-be7a-4449-b34d-8307959c4608" (UID: "d9cafcc1-be7a-4449-b34d-8307959c4608"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.945909 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxkfb\" (UniqueName: \"kubernetes.io/projected/d9cafcc1-be7a-4449-b34d-8307959c4608-kube-api-access-fxkfb\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.945966 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-util\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:45 crc kubenswrapper[4827]: I0126 09:18:45.945984 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9cafcc1-be7a-4449-b34d-8307959c4608-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:46 crc kubenswrapper[4827]: I0126 09:18:46.412434 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" 
event={"ID":"d9cafcc1-be7a-4449-b34d-8307959c4608","Type":"ContainerDied","Data":"7d0f707bbc390e8f8ae7d0c54ea2d006a08a6a79602d7f58c0ee440b0f5ea88c"} Jan 26 09:18:46 crc kubenswrapper[4827]: I0126 09:18:46.412487 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0f707bbc390e8f8ae7d0c54ea2d006a08a6a79602d7f58c0ee440b0f5ea88c" Jan 26 09:18:46 crc kubenswrapper[4827]: I0126 09:18:46.412490 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.586012 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d5ltb"] Jan 26 09:18:49 crc kubenswrapper[4827]: E0126 09:18:49.586576 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="pull" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.586593 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="pull" Jan 26 09:18:49 crc kubenswrapper[4827]: E0126 09:18:49.586651 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="util" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.586660 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="util" Jan 26 09:18:49 crc kubenswrapper[4827]: E0126 09:18:49.586672 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="extract" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.586679 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="extract" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.586817 4827 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d9cafcc1-be7a-4449-b34d-8307959c4608" containerName="extract" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.587315 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.589530 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.590857 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.593327 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2fzm7" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.629052 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d5ltb"] Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.696815 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4r5f\" (UniqueName: \"kubernetes.io/projected/de78c189-6378-4709-8f64-c4ec5c433064-kube-api-access-j4r5f\") pod \"nmstate-operator-646758c888-d5ltb\" (UID: \"de78c189-6378-4709-8f64-c4ec5c433064\") " pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.797713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4r5f\" (UniqueName: \"kubernetes.io/projected/de78c189-6378-4709-8f64-c4ec5c433064-kube-api-access-j4r5f\") pod \"nmstate-operator-646758c888-d5ltb\" (UID: \"de78c189-6378-4709-8f64-c4ec5c433064\") " pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.821423 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j4r5f\" (UniqueName: \"kubernetes.io/projected/de78c189-6378-4709-8f64-c4ec5c433064-kube-api-access-j4r5f\") pod \"nmstate-operator-646758c888-d5ltb\" (UID: \"de78c189-6378-4709-8f64-c4ec5c433064\") " pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" Jan 26 09:18:49 crc kubenswrapper[4827]: I0126 09:18:49.906438 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" Jan 26 09:18:50 crc kubenswrapper[4827]: I0126 09:18:50.170255 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d5ltb"] Jan 26 09:18:50 crc kubenswrapper[4827]: I0126 09:18:50.429794 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" event={"ID":"de78c189-6378-4709-8f64-c4ec5c433064","Type":"ContainerStarted","Data":"e6f14b4fce5e7383dbfa5677c266a40632cae2e374fd1d06fc0d3887b9243520"} Jan 26 09:18:51 crc kubenswrapper[4827]: I0126 09:18:51.088172 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:51 crc kubenswrapper[4827]: I0126 09:18:51.088545 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:51 crc kubenswrapper[4827]: I0126 09:18:51.130742 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:51 crc kubenswrapper[4827]: I0126 09:18:51.473558 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:53 crc kubenswrapper[4827]: I0126 09:18:53.465162 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.452554 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" event={"ID":"de78c189-6378-4709-8f64-c4ec5c433064","Type":"ContainerStarted","Data":"26c88dad0c9ff37b0488031be2cd66daff43c9adcca92ae02114aa321d41f988"} Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.452758 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lgwz5" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="registry-server" containerID="cri-o://4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada" gracePeriod=2 Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.481725 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-d5ltb" podStartSLOduration=1.716466198 podStartE2EDuration="5.481705358s" podCreationTimestamp="2026-01-26 09:18:49 +0000 UTC" firstStartedPulling="2026-01-26 09:18:50.181027054 +0000 UTC m=+758.829698873" lastFinishedPulling="2026-01-26 09:18:53.946266214 +0000 UTC m=+762.594938033" observedRunningTime="2026-01-26 09:18:54.478242732 +0000 UTC m=+763.126914561" watchObservedRunningTime="2026-01-26 09:18:54.481705358 +0000 UTC m=+763.130377187" Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.829778 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.961258 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities\") pod \"397a82ed-23ab-4fa6-96db-e455839afc8b\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.961383 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jqn9\" (UniqueName: \"kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9\") pod \"397a82ed-23ab-4fa6-96db-e455839afc8b\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.961482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content\") pod \"397a82ed-23ab-4fa6-96db-e455839afc8b\" (UID: \"397a82ed-23ab-4fa6-96db-e455839afc8b\") " Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.963454 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities" (OuterVolumeSpecName: "utilities") pod "397a82ed-23ab-4fa6-96db-e455839afc8b" (UID: "397a82ed-23ab-4fa6-96db-e455839afc8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:18:54 crc kubenswrapper[4827]: I0126 09:18:54.969029 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9" (OuterVolumeSpecName: "kube-api-access-2jqn9") pod "397a82ed-23ab-4fa6-96db-e455839afc8b" (UID: "397a82ed-23ab-4fa6-96db-e455839afc8b"). InnerVolumeSpecName "kube-api-access-2jqn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.063773 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.063809 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jqn9\" (UniqueName: \"kubernetes.io/projected/397a82ed-23ab-4fa6-96db-e455839afc8b-kube-api-access-2jqn9\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.086102 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "397a82ed-23ab-4fa6-96db-e455839afc8b" (UID: "397a82ed-23ab-4fa6-96db-e455839afc8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.164446 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397a82ed-23ab-4fa6-96db-e455839afc8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.461466 4827 generic.go:334] "Generic (PLEG): container finished" podID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerID="4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada" exitCode=0 Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.461538 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerDied","Data":"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada"} Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.461551 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgwz5" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.461572 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgwz5" event={"ID":"397a82ed-23ab-4fa6-96db-e455839afc8b","Type":"ContainerDied","Data":"e1468c791128eb58f0e02eed96c19d8472e17fa7e5ce5e7aae2a45efe458ad6b"} Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.461609 4827 scope.go:117] "RemoveContainer" containerID="4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.488598 4827 scope.go:117] "RemoveContainer" containerID="d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.511879 4827 scope.go:117] "RemoveContainer" containerID="5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.520512 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.536429 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lgwz5"] Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.539040 4827 scope.go:117] "RemoveContainer" containerID="4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada" Jan 26 09:18:55 crc kubenswrapper[4827]: E0126 09:18:55.540047 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada\": container with ID starting with 4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada not found: ID does not exist" containerID="4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.540086 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada"} err="failed to get container status \"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada\": rpc error: code = NotFound desc = could not find container \"4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada\": container with ID starting with 4643fd23b7a786ce4a8c34803eadbd3f24fdeea3d620952f41fc1a4da60caada not found: ID does not exist" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.540111 4827 scope.go:117] "RemoveContainer" containerID="d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e" Jan 26 09:18:55 crc kubenswrapper[4827]: E0126 09:18:55.541339 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e\": container with ID starting with d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e not found: ID does not exist" containerID="d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.541372 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e"} err="failed to get container status \"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e\": rpc error: code = NotFound desc = could not find container \"d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e\": container with ID starting with d2f7cbefd1a8b1531b525a69fecb9c76dd480a373061b88e60f842ca694a3b3e not found: ID does not exist" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.541396 4827 scope.go:117] "RemoveContainer" containerID="5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9" Jan 26 09:18:55 crc kubenswrapper[4827]: E0126 
09:18:55.542055 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9\": container with ID starting with 5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9 not found: ID does not exist" containerID="5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.542091 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9"} err="failed to get container status \"5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9\": rpc error: code = NotFound desc = could not find container \"5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9\": container with ID starting with 5123227a383b10b09ddbaa2a7d92cb6ac565ac4593c07ce86e5a98926d6ecaa9 not found: ID does not exist" Jan 26 09:18:55 crc kubenswrapper[4827]: I0126 09:18:55.715798 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" path="/var/lib/kubelet/pods/397a82ed-23ab-4fa6-96db-e455839afc8b/volumes" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.228271 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wkvlg"] Jan 26 09:18:59 crc kubenswrapper[4827]: E0126 09:18:59.228998 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="extract-content" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.229021 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="extract-content" Jan 26 09:18:59 crc kubenswrapper[4827]: E0126 09:18:59.229042 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="registry-server" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.229054 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="registry-server" Jan 26 09:18:59 crc kubenswrapper[4827]: E0126 09:18:59.229079 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="extract-utilities" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.229092 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="extract-utilities" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.229283 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="397a82ed-23ab-4fa6-96db-e455839afc8b" containerName="registry-server" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.230230 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.234492 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sgftm" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.262745 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lbplf"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.263357 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.277574 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.278190 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.281701 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.318026 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.322723 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8fbg\" (UniqueName: \"kubernetes.io/projected/fc6481c7-2911-4068-9e79-b44f492beda6-kube-api-access-h8fbg\") pod \"nmstate-metrics-54757c584b-wkvlg\" (UID: \"fc6481c7-2911-4068-9e79-b44f492beda6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.328804 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wkvlg"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.385836 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.386778 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.388512 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.389435 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zqq5t" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.392116 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.403650 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423600 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-nmstate-lock\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423664 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5tl\" (UniqueName: \"kubernetes.io/projected/63aaa24b-8f3f-426f-910b-65c0a0fa9429-kube-api-access-rh5tl\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423689 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-dbus-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 
26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423723 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-ovs-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423743 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c0175eb7-29d3-4293-ab63-f5db59a1092b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423774 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmk4\" (UniqueName: \"kubernetes.io/projected/c0175eb7-29d3-4293-ab63-f5db59a1092b-kube-api-access-ftmk4\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.423816 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8fbg\" (UniqueName: \"kubernetes.io/projected/fc6481c7-2911-4068-9e79-b44f492beda6-kube-api-access-h8fbg\") pod \"nmstate-metrics-54757c584b-wkvlg\" (UID: \"fc6481c7-2911-4068-9e79-b44f492beda6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.455887 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8fbg\" (UniqueName: \"kubernetes.io/projected/fc6481c7-2911-4068-9e79-b44f492beda6-kube-api-access-h8fbg\") pod \"nmstate-metrics-54757c584b-wkvlg\" (UID: 
\"fc6481c7-2911-4068-9e79-b44f492beda6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525372 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7988cfe9-a182-49bf-b821-06d94fb81ec5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525435 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n9l\" (UniqueName: \"kubernetes.io/projected/7988cfe9-a182-49bf-b821-06d94fb81ec5-kube-api-access-t9n9l\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525473 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-ovs-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c0175eb7-29d3-4293-ab63-f5db59a1092b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525536 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmk4\" (UniqueName: 
\"kubernetes.io/projected/c0175eb7-29d3-4293-ab63-f5db59a1092b-kube-api-access-ftmk4\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525588 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-ovs-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525627 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-nmstate-lock\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525680 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5tl\" (UniqueName: \"kubernetes.io/projected/63aaa24b-8f3f-426f-910b-65c0a0fa9429-kube-api-access-rh5tl\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525710 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-dbus-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.525942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-nmstate-lock\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.526043 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/63aaa24b-8f3f-426f-910b-65c0a0fa9429-dbus-socket\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.530629 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c0175eb7-29d3-4293-ab63-f5db59a1092b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.551332 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.551994 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5tl\" (UniqueName: \"kubernetes.io/projected/63aaa24b-8f3f-426f-910b-65c0a0fa9429-kube-api-access-rh5tl\") pod \"nmstate-handler-lbplf\" (UID: \"63aaa24b-8f3f-426f-910b-65c0a0fa9429\") " pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.554713 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmk4\" (UniqueName: \"kubernetes.io/projected/c0175eb7-29d3-4293-ab63-f5db59a1092b-kube-api-access-ftmk4\") pod \"nmstate-webhook-8474b5b9d8-vp6lz\" (UID: \"c0175eb7-29d3-4293-ab63-f5db59a1092b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.582375 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.594915 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.624196 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c76489864-kx6b4"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.625054 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.626434 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7988cfe9-a182-49bf-b821-06d94fb81ec5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.626486 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9n9l\" (UniqueName: \"kubernetes.io/projected/7988cfe9-a182-49bf-b821-06d94fb81ec5-kube-api-access-t9n9l\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.626541 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: E0126 09:18:59.626664 4827 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 09:18:59 crc kubenswrapper[4827]: E0126 09:18:59.626711 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert podName:7988cfe9-a182-49bf-b821-06d94fb81ec5 nodeName:}" failed. No retries permitted until 2026-01-26 09:19:00.126695359 +0000 UTC m=+768.775367178 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-cx5k7" (UID: "7988cfe9-a182-49bf-b821-06d94fb81ec5") : secret "plugin-serving-cert" not found Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.627991 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7988cfe9-a182-49bf-b821-06d94fb81ec5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: W0126 09:18:59.643982 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63aaa24b_8f3f_426f_910b_65c0a0fa9429.slice/crio-95649616d0d5d690037d2d9f4720e5f6b83dba599ae29487be368ffc70683906 WatchSource:0}: Error finding container 95649616d0d5d690037d2d9f4720e5f6b83dba599ae29487be368ffc70683906: Status 404 returned error can't find the container with id 95649616d0d5d690037d2d9f4720e5f6b83dba599ae29487be368ffc70683906 Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.652294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-kx6b4"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.676738 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9n9l\" (UniqueName: \"kubernetes.io/projected/7988cfe9-a182-49bf-b821-06d94fb81ec5-kube-api-access-t9n9l\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.729398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.729494 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-trusted-ca-bundle\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.729518 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrfg\" (UniqueName: \"kubernetes.io/projected/768316a4-7e21-4b4a-b826-ce6334104211-kube-api-access-qkrfg\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.729540 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-service-ca\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.729722 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-console-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.730395 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-oauth-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.730446 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-oauth-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.833249 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wkvlg"] Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834285 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-service-ca\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834357 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-console-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834418 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-oauth-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: 
\"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834440 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-oauth-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834629 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834720 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-trusted-ca-bundle\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.834747 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrfg\" (UniqueName: \"kubernetes.io/projected/768316a4-7e21-4b4a-b826-ce6334104211-kube-api-access-qkrfg\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.835928 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-oauth-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: 
\"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.836808 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-console-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.839019 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-service-ca\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.839864 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768316a4-7e21-4b4a-b826-ce6334104211-trusted-ca-bundle\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.843164 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-serving-cert\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.846153 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/768316a4-7e21-4b4a-b826-ce6334104211-console-oauth-config\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " 
pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.856965 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrfg\" (UniqueName: \"kubernetes.io/projected/768316a4-7e21-4b4a-b826-ce6334104211-kube-api-access-qkrfg\") pod \"console-c76489864-kx6b4\" (UID: \"768316a4-7e21-4b4a-b826-ce6334104211\") " pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.910090 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz"] Jan 26 09:18:59 crc kubenswrapper[4827]: W0126 09:18:59.912847 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0175eb7_29d3_4293_ab63_f5db59a1092b.slice/crio-c194ad1d58c3634dd604940d5296ff71b4b532f8e97d8e1be3a6bd1aa0671e0b WatchSource:0}: Error finding container c194ad1d58c3634dd604940d5296ff71b4b532f8e97d8e1be3a6bd1aa0671e0b: Status 404 returned error can't find the container with id c194ad1d58c3634dd604940d5296ff71b4b532f8e97d8e1be3a6bd1aa0671e0b Jan 26 09:18:59 crc kubenswrapper[4827]: I0126 09:18:59.954274 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c76489864-kx6b4" Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.137667 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.139236 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-kx6b4"] Jan 26 09:19:00 crc kubenswrapper[4827]: W0126 09:19:00.139947 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod768316a4_7e21_4b4a_b826_ce6334104211.slice/crio-072283b26c9de03a4a5c2e89d5cddbc6058c0f7283893a0573473e72dc193150 WatchSource:0}: Error finding container 072283b26c9de03a4a5c2e89d5cddbc6058c0f7283893a0573473e72dc193150: Status 404 returned error can't find the container with id 072283b26c9de03a4a5c2e89d5cddbc6058c0f7283893a0573473e72dc193150 Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.141892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7988cfe9-a182-49bf-b821-06d94fb81ec5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cx5k7\" (UID: \"7988cfe9-a182-49bf-b821-06d94fb81ec5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.305128 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.493172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" event={"ID":"c0175eb7-29d3-4293-ab63-f5db59a1092b","Type":"ContainerStarted","Data":"c194ad1d58c3634dd604940d5296ff71b4b532f8e97d8e1be3a6bd1aa0671e0b"} Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.494134 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" event={"ID":"fc6481c7-2911-4068-9e79-b44f492beda6","Type":"ContainerStarted","Data":"075b7e28a697f5f37e543311dc6f086c6e27aa1f16b8967c64a128964a481cb9"} Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.495465 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-kx6b4" event={"ID":"768316a4-7e21-4b4a-b826-ce6334104211","Type":"ContainerStarted","Data":"55149ecefd5ca4c8713c923a1c5c9dea677fe62f3d99eb3e4d5447337b0b76c1"} Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.495490 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-kx6b4" event={"ID":"768316a4-7e21-4b4a-b826-ce6334104211","Type":"ContainerStarted","Data":"072283b26c9de03a4a5c2e89d5cddbc6058c0f7283893a0573473e72dc193150"} Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.497446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbplf" event={"ID":"63aaa24b-8f3f-426f-910b-65c0a0fa9429","Type":"ContainerStarted","Data":"95649616d0d5d690037d2d9f4720e5f6b83dba599ae29487be368ffc70683906"} Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.515266 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c76489864-kx6b4" podStartSLOduration=1.515239859 podStartE2EDuration="1.515239859s" podCreationTimestamp="2026-01-26 09:18:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:19:00.512587795 +0000 UTC m=+769.161259624" watchObservedRunningTime="2026-01-26 09:19:00.515239859 +0000 UTC m=+769.163911678" Jan 26 09:19:00 crc kubenswrapper[4827]: I0126 09:19:00.529251 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7"] Jan 26 09:19:00 crc kubenswrapper[4827]: W0126 09:19:00.532286 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7988cfe9_a182_49bf_b821_06d94fb81ec5.slice/crio-f0796f6bc838fbb112dfa67fc81b2f76b3fb894f7758cf10b0e1e5e78d0da4fe WatchSource:0}: Error finding container f0796f6bc838fbb112dfa67fc81b2f76b3fb894f7758cf10b0e1e5e78d0da4fe: Status 404 returned error can't find the container with id f0796f6bc838fbb112dfa67fc81b2f76b3fb894f7758cf10b0e1e5e78d0da4fe Jan 26 09:19:01 crc kubenswrapper[4827]: I0126 09:19:01.504093 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" event={"ID":"7988cfe9-a182-49bf-b821-06d94fb81ec5","Type":"ContainerStarted","Data":"f0796f6bc838fbb112dfa67fc81b2f76b3fb894f7758cf10b0e1e5e78d0da4fe"} Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.515823 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" event={"ID":"fc6481c7-2911-4068-9e79-b44f492beda6","Type":"ContainerStarted","Data":"e1789431347d815832bd9b9e2aa9c683ba1cfeb644ecf3cca44adb8f177d2574"} Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.518607 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbplf" event={"ID":"63aaa24b-8f3f-426f-910b-65c0a0fa9429","Type":"ContainerStarted","Data":"5c4413ba90c41af540618cc86ef1366d1205f7c09ddc71066f58fb561282499f"} Jan 26 09:19:03 crc 
kubenswrapper[4827]: I0126 09:19:03.519685 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lbplf" Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.521517 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" event={"ID":"c0175eb7-29d3-4293-ab63-f5db59a1092b","Type":"ContainerStarted","Data":"737ed2d203f5b7cd799d10d631c1d220ed6c14f34b7edae9ee5729343cae16e9"} Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.521962 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.545230 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lbplf" podStartSLOduration=1.857925863 podStartE2EDuration="4.545204676s" podCreationTimestamp="2026-01-26 09:18:59 +0000 UTC" firstStartedPulling="2026-01-26 09:18:59.655985256 +0000 UTC m=+768.304657075" lastFinishedPulling="2026-01-26 09:19:02.343264069 +0000 UTC m=+770.991935888" observedRunningTime="2026-01-26 09:19:03.535384643 +0000 UTC m=+772.184056462" watchObservedRunningTime="2026-01-26 09:19:03.545204676 +0000 UTC m=+772.193876495" Jan 26 09:19:03 crc kubenswrapper[4827]: I0126 09:19:03.556867 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz" podStartSLOduration=2.108849526 podStartE2EDuration="4.556843701s" podCreationTimestamp="2026-01-26 09:18:59 +0000 UTC" firstStartedPulling="2026-01-26 09:18:59.914311006 +0000 UTC m=+768.562982825" lastFinishedPulling="2026-01-26 09:19:02.362305181 +0000 UTC m=+771.010977000" observedRunningTime="2026-01-26 09:19:03.554927888 +0000 UTC m=+772.203599707" watchObservedRunningTime="2026-01-26 09:19:03.556843701 +0000 UTC m=+772.205515520" Jan 26 09:19:04 crc kubenswrapper[4827]: I0126 09:19:04.529421 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" event={"ID":"7988cfe9-a182-49bf-b821-06d94fb81ec5","Type":"ContainerStarted","Data":"149dc1879d20ce3d54d30d1036e1ff24116293e018565e7b6a8bb1f144dcceac"}
Jan 26 09:19:04 crc kubenswrapper[4827]: I0126 09:19:04.547920 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cx5k7" podStartSLOduration=2.315685409 podStartE2EDuration="5.547902332s" podCreationTimestamp="2026-01-26 09:18:59 +0000 UTC" firstStartedPulling="2026-01-26 09:19:00.534341682 +0000 UTC m=+769.183013491" lastFinishedPulling="2026-01-26 09:19:03.766558575 +0000 UTC m=+772.415230414" observedRunningTime="2026-01-26 09:19:04.545106694 +0000 UTC m=+773.193778513" watchObservedRunningTime="2026-01-26 09:19:04.547902332 +0000 UTC m=+773.196574151"
Jan 26 09:19:05 crc kubenswrapper[4827]: I0126 09:19:05.536693 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" event={"ID":"fc6481c7-2911-4068-9e79-b44f492beda6","Type":"ContainerStarted","Data":"3aacbceca951447a83be1295d8642a5af90305e381d4c45b3dec324c34d30d89"}
Jan 26 09:19:05 crc kubenswrapper[4827]: I0126 09:19:05.559387 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-wkvlg" podStartSLOduration=1.4464647990000001 podStartE2EDuration="6.559332942s" podCreationTimestamp="2026-01-26 09:18:59 +0000 UTC" firstStartedPulling="2026-01-26 09:18:59.846813632 +0000 UTC m=+768.495485451" lastFinishedPulling="2026-01-26 09:19:04.959681775 +0000 UTC m=+773.608353594" observedRunningTime="2026-01-26 09:19:05.552868572 +0000 UTC m=+774.201540431" watchObservedRunningTime="2026-01-26 09:19:05.559332942 +0000 UTC m=+774.208004781"
Jan 26 09:19:09 crc kubenswrapper[4827]: I0126 09:19:09.607028 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lbplf"
Jan 26 09:19:09 crc kubenswrapper[4827]: I0126 09:19:09.955340 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c76489864-kx6b4"
Jan 26 09:19:09 crc kubenswrapper[4827]: I0126 09:19:09.955401 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c76489864-kx6b4"
Jan 26 09:19:09 crc kubenswrapper[4827]: I0126 09:19:09.963209 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c76489864-kx6b4"
Jan 26 09:19:10 crc kubenswrapper[4827]: I0126 09:19:10.576386 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c76489864-kx6b4"
Jan 26 09:19:10 crc kubenswrapper[4827]: I0126 09:19:10.663460 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"]
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.269364 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.269872 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.269939 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.270804 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.270881 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca" gracePeriod=600
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.586360 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca" exitCode=0
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.586411 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca"}
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.586441 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e"}
Jan 26 09:19:12 crc kubenswrapper[4827]: I0126 09:19:12.586457 4827 scope.go:117] "RemoveContainer" containerID="07419d5ffeb9e01f78bef452de4e0f1d26ff67f6df0a2b67a252504eedf2a784"
Jan 26 09:19:19 crc kubenswrapper[4827]: I0126 09:19:19.603313 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vp6lz"
Jan 26 09:19:31 crc kubenswrapper[4827]: I0126 09:19:31.856779 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"]
Jan 26 09:19:31 crc kubenswrapper[4827]: I0126 09:19:31.858600 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:31 crc kubenswrapper[4827]: I0126 09:19:31.860845 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 09:19:31 crc kubenswrapper[4827]: I0126 09:19:31.870531 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"]
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.024993 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmml\" (UniqueName: \"kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.025082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.025128 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.125782 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmml\" (UniqueName: \"kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.125858 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.125905 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.126342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.126368 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.149186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmml\" (UniqueName: \"kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.180087 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.406247 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"]
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.707907 4827 generic.go:334] "Generic (PLEG): container finished" podID="da2a9099-cc10-4968-887e-ca1d997b172c" containerID="8af5b158c89055c3dd77c5e1ae6a5836dac6038084f503508a963b80c1eda21c" exitCode=0
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.707954 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k" event={"ID":"da2a9099-cc10-4968-887e-ca1d997b172c","Type":"ContainerDied","Data":"8af5b158c89055c3dd77c5e1ae6a5836dac6038084f503508a963b80c1eda21c"}
Jan 26 09:19:32 crc kubenswrapper[4827]: I0126 09:19:32.707979 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k" event={"ID":"da2a9099-cc10-4968-887e-ca1d997b172c","Type":"ContainerStarted","Data":"7be8c6ac9690845a2fd702f28edea337f98a61902e096eb6df6b80efa42067b2"}
Jan 26 09:19:34 crc kubenswrapper[4827]: I0126 09:19:34.721773 4827 generic.go:334] "Generic (PLEG): container finished" podID="da2a9099-cc10-4968-887e-ca1d997b172c" containerID="4384c426b58384162a6aa7daec012e009f9ee846da5ff94d3b5664ed62554471" exitCode=0
Jan 26 09:19:34 crc kubenswrapper[4827]: I0126 09:19:34.721865 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k" event={"ID":"da2a9099-cc10-4968-887e-ca1d997b172c","Type":"ContainerDied","Data":"4384c426b58384162a6aa7daec012e009f9ee846da5ff94d3b5664ed62554471"}
Jan 26 09:19:35 crc kubenswrapper[4827]: I0126 09:19:35.730427 4827 generic.go:334] "Generic (PLEG): container finished" podID="da2a9099-cc10-4968-887e-ca1d997b172c" containerID="3dee9fde78c58de36e2a657a5c813c3bf4f290ad0ab7637fc24cc0eba8e744e4" exitCode=0
Jan 26 09:19:35 crc kubenswrapper[4827]: I0126 09:19:35.730485 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k" event={"ID":"da2a9099-cc10-4968-887e-ca1d997b172c","Type":"ContainerDied","Data":"3dee9fde78c58de36e2a657a5c813c3bf4f290ad0ab7637fc24cc0eba8e744e4"}
Jan 26 09:19:35 crc kubenswrapper[4827]: I0126 09:19:35.745352 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cnfxn" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console" containerID="cri-o://aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25" gracePeriod=15
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.149919 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cnfxn_ec0fa073-2bf5-49f4-aa07-1e3c34066f5a/console/0.log"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.150271 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.282828 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phxxw\" (UniqueName: \"kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.282877 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.282938 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.282952 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.282985 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.283017 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.283033 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config\") pod \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\" (UID: \"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a\") "
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284002 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca" (OuterVolumeSpecName: "service-ca") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284020 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284042 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config" (OuterVolumeSpecName: "console-config") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284073 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284341 4827 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284355 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284364 4827 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.284371 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.289696 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.290032 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw" (OuterVolumeSpecName: "kube-api-access-phxxw") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "kube-api-access-phxxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.290091 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" (UID: "ec0fa073-2bf5-49f4-aa07-1e3c34066f5a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.385435 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phxxw\" (UniqueName: \"kubernetes.io/projected/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-kube-api-access-phxxw\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.385470 4827 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.385484 4827 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.743265 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cnfxn_ec0fa073-2bf5-49f4-aa07-1e3c34066f5a/console/0.log"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.743459 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cnfxn"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.743562 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cnfxn" event={"ID":"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a","Type":"ContainerDied","Data":"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"}
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.743676 4827 scope.go:117] "RemoveContainer" containerID="aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.746465 4827 generic.go:334] "Generic (PLEG): container finished" podID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerID="aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25" exitCode=2
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.746792 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cnfxn" event={"ID":"ec0fa073-2bf5-49f4-aa07-1e3c34066f5a","Type":"ContainerDied","Data":"8b62256b1d41a9ccb1a3425c852ba6cc81a17dfc5dd065bfa1d94edf1be6f957"}
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.788976 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"]
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.797954 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cnfxn"]
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.836720 4827 scope.go:117] "RemoveContainer" containerID="aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"
Jan 26 09:19:36 crc kubenswrapper[4827]: E0126 09:19:36.837166 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25\": container with ID starting with aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25 not found: ID does not exist" containerID="aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"
Jan 26 09:19:36 crc kubenswrapper[4827]: I0126 09:19:36.837189 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25"} err="failed to get container status \"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25\": rpc error: code = NotFound desc = could not find container \"aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25\": container with ID starting with aae73aecdabd354a374800702137944d7fd1e210e23447d8f23c97c93c655b25 not found: ID does not exist"
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.052832 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.194390 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util\") pod \"da2a9099-cc10-4968-887e-ca1d997b172c\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") "
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.194567 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfmml\" (UniqueName: \"kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml\") pod \"da2a9099-cc10-4968-887e-ca1d997b172c\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") "
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.194628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle\") pod \"da2a9099-cc10-4968-887e-ca1d997b172c\" (UID: \"da2a9099-cc10-4968-887e-ca1d997b172c\") "
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.196048 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle" (OuterVolumeSpecName: "bundle") pod "da2a9099-cc10-4968-887e-ca1d997b172c" (UID: "da2a9099-cc10-4968-887e-ca1d997b172c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.200093 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml" (OuterVolumeSpecName: "kube-api-access-dfmml") pod "da2a9099-cc10-4968-887e-ca1d997b172c" (UID: "da2a9099-cc10-4968-887e-ca1d997b172c"). InnerVolumeSpecName "kube-api-access-dfmml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.208696 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util" (OuterVolumeSpecName: "util") pod "da2a9099-cc10-4968-887e-ca1d997b172c" (UID: "da2a9099-cc10-4968-887e-ca1d997b172c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.297027 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.297112 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da2a9099-cc10-4968-887e-ca1d997b172c-util\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.297131 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfmml\" (UniqueName: \"kubernetes.io/projected/da2a9099-cc10-4968-887e-ca1d997b172c-kube-api-access-dfmml\") on node \"crc\" DevicePath \"\""
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.717350 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" path="/var/lib/kubelet/pods/ec0fa073-2bf5-49f4-aa07-1e3c34066f5a/volumes"
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.756822 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k" event={"ID":"da2a9099-cc10-4968-887e-ca1d997b172c","Type":"ContainerDied","Data":"7be8c6ac9690845a2fd702f28edea337f98a61902e096eb6df6b80efa42067b2"}
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.756912 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k"
Jan 26 09:19:37 crc kubenswrapper[4827]: I0126 09:19:37.756932 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be8c6ac9690845a2fd702f28edea337f98a61902e096eb6df6b80efa42067b2"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.486346 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"]
Jan 26 09:19:46 crc kubenswrapper[4827]: E0126 09:19:46.487088 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="extract"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487101 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="extract"
Jan 26 09:19:46 crc kubenswrapper[4827]: E0126 09:19:46.487112 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="util"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487118 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="util"
Jan 26 09:19:46 crc kubenswrapper[4827]: E0126 09:19:46.487129 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="pull"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487135 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="pull"
Jan 26 09:19:46 crc kubenswrapper[4827]: E0126 09:19:46.487155 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487160 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487251 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0fa073-2bf5-49f4-aa07-1e3c34066f5a" containerName="console"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487265 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2a9099-cc10-4968-887e-ca1d997b172c" containerName="extract"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.487597 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.494350 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-f2wwv"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.494488 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.494610 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.494784 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.494887 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.558417 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"]
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.601873 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-apiservice-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.601929 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-webhook-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.601974 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqwnl\" (UniqueName: \"kubernetes.io/projected/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-kube-api-access-qqwnl\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.703119 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-apiservice-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.703171 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-webhook-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.703214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqwnl\" (UniqueName: \"kubernetes.io/projected/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-kube-api-access-qqwnl\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.709462 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-apiservice-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.727577 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqwnl\" (UniqueName: \"kubernetes.io/projected/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-kube-api-access-qqwnl\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.734190 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/538f5ce6-87b2-41eb-ad3a-92d274c88dbb-webhook-cert\") pod \"metallb-operator-controller-manager-d6b7f6684-4h68b\" (UID: \"538f5ce6-87b2-41eb-ad3a-92d274c88dbb\") " pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:46 crc kubenswrapper[4827]: I0126 09:19:46.801796 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.011767 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"]
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.012391 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.015435 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fzw45"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.016560 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.018626 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.040630 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"]
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.107229 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fk5\" (UniqueName: \"kubernetes.io/projected/619a25aa-4152-45b1-b27e-b1dd154b5738-kube-api-access-d8fk5\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.107596 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-apiservice-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.107705 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-webhook-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.208449 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8fk5\" (UniqueName: \"kubernetes.io/projected/619a25aa-4152-45b1-b27e-b1dd154b5738-kube-api-access-d8fk5\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.208826 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-apiservice-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.209529 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-webhook-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"
Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.215418 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-webhook-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.215429 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/619a25aa-4152-45b1-b27e-b1dd154b5738-apiservice-cert\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.234361 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8fk5\" (UniqueName: \"kubernetes.io/projected/619a25aa-4152-45b1-b27e-b1dd154b5738-kube-api-access-d8fk5\") pod \"metallb-operator-webhook-server-5b856d8997-9lj9h\" (UID: \"619a25aa-4152-45b1-b27e-b1dd154b5738\") " pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.272438 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b"] Jan 26 09:19:47 crc kubenswrapper[4827]: W0126 09:19:47.280051 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538f5ce6_87b2_41eb_ad3a_92d274c88dbb.slice/crio-9e67660eea47ebc47dfa4ca8a0439f25c4056846bcd55d28feea70b22ba45394 WatchSource:0}: Error finding container 9e67660eea47ebc47dfa4ca8a0439f25c4056846bcd55d28feea70b22ba45394: Status 404 returned error can't find the container with id 9e67660eea47ebc47dfa4ca8a0439f25c4056846bcd55d28feea70b22ba45394 Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 
09:19:47.326111 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.574165 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h"] Jan 26 09:19:47 crc kubenswrapper[4827]: W0126 09:19:47.579574 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod619a25aa_4152_45b1_b27e_b1dd154b5738.slice/crio-64f0a20516c76a1b0a12bca913bbc5bdde08c877153c41bb32972aaf47131a05 WatchSource:0}: Error finding container 64f0a20516c76a1b0a12bca913bbc5bdde08c877153c41bb32972aaf47131a05: Status 404 returned error can't find the container with id 64f0a20516c76a1b0a12bca913bbc5bdde08c877153c41bb32972aaf47131a05 Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.832139 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b" event={"ID":"538f5ce6-87b2-41eb-ad3a-92d274c88dbb","Type":"ContainerStarted","Data":"9e67660eea47ebc47dfa4ca8a0439f25c4056846bcd55d28feea70b22ba45394"} Jan 26 09:19:47 crc kubenswrapper[4827]: I0126 09:19:47.833474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" event={"ID":"619a25aa-4152-45b1-b27e-b1dd154b5738","Type":"ContainerStarted","Data":"64f0a20516c76a1b0a12bca913bbc5bdde08c877153c41bb32972aaf47131a05"} Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.866841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b" event={"ID":"538f5ce6-87b2-41eb-ad3a-92d274c88dbb","Type":"ContainerStarted","Data":"084815e324e69e6d86264018e5bdc4d59a40f5c3578139e98fc78fb502322ebb"} Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.867479 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b" Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.869414 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" event={"ID":"619a25aa-4152-45b1-b27e-b1dd154b5738","Type":"ContainerStarted","Data":"a589af263ebb236ea02877b3335eccd18bd6f16ad822e90d9a040d09b81bf224"} Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.869595 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.895577 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b" podStartSLOduration=2.290545115 podStartE2EDuration="6.895557831s" podCreationTimestamp="2026-01-26 09:19:46 +0000 UTC" firstStartedPulling="2026-01-26 09:19:47.282677712 +0000 UTC m=+815.931349531" lastFinishedPulling="2026-01-26 09:19:51.887690428 +0000 UTC m=+820.536362247" observedRunningTime="2026-01-26 09:19:52.889822218 +0000 UTC m=+821.538494037" watchObservedRunningTime="2026-01-26 09:19:52.895557831 +0000 UTC m=+821.544229650" Jan 26 09:19:52 crc kubenswrapper[4827]: I0126 09:19:52.924133 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" podStartSLOduration=2.5679856819999998 podStartE2EDuration="6.924109142s" podCreationTimestamp="2026-01-26 09:19:46 +0000 UTC" firstStartedPulling="2026-01-26 09:19:47.582021962 +0000 UTC m=+816.230693781" lastFinishedPulling="2026-01-26 09:19:51.938145422 +0000 UTC m=+820.586817241" observedRunningTime="2026-01-26 09:19:52.913705306 +0000 UTC m=+821.562377125" watchObservedRunningTime="2026-01-26 09:19:52.924109142 +0000 UTC m=+821.572780971" Jan 26 09:20:07 
crc kubenswrapper[4827]: I0126 09:20:07.332758 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b856d8997-9lj9h" Jan 26 09:20:26 crc kubenswrapper[4827]: I0126 09:20:26.805257 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-d6b7f6684-4h68b" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.693839 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z5mhg"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.696372 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.698587 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.698623 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9f9b9" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.699531 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.712581 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.713332 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.717594 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.730024 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.743866 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-reloader\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.743930 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.743956 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-conf\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744025 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvvp\" (UniqueName: \"kubernetes.io/projected/80d0ec40-8d37-43f1-93c8-8c970fba7072-kube-api-access-dgvvp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744049 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-startup\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744071 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-sockets\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744133 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics-certs\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744148 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9th\" (UniqueName: \"kubernetes.io/projected/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-kube-api-access-bn9th\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.744164 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.831202 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9rcbb"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.832122 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9rcbb" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.834236 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.834630 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tfwgn" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.835535 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.842970 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zxxx8"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.843989 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.844887 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-conf\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.844928 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvvp\" (UniqueName: \"kubernetes.io/projected/80d0ec40-8d37-43f1-93c8-8c970fba7072-kube-api-access-dgvvp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.844967 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-startup\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.844988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-sockets\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics-certs\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845041 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9th\" (UniqueName: \"kubernetes.io/projected/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-kube-api-access-bn9th\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845059 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845104 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-reloader\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845468 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845572 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-sockets\") pod 
\"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.845869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-conf\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.846931 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-frr-startup\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: E0126 09:20:27.847264 4827 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.847311 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 09:20:27 crc kubenswrapper[4827]: E0126 09:20:27.847321 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert podName:80d0ec40-8d37-43f1-93c8-8c970fba7072 nodeName:}" failed. No retries permitted until 2026-01-26 09:20:28.34730332 +0000 UTC m=+856.995975139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert") pod "frr-k8s-webhook-server-7df86c4f6c-8pczr" (UID: "80d0ec40-8d37-43f1-93c8-8c970fba7072") : secret "frr-k8s-webhook-server-cert" not found Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.847703 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.847798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-reloader\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.859893 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zxxx8"] Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.861333 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-metrics-certs\") pod \"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.868857 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvvp\" (UniqueName: \"kubernetes.io/projected/80d0ec40-8d37-43f1-93c8-8c970fba7072-kube-api-access-dgvvp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.881873 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9th\" (UniqueName: \"kubernetes.io/projected/9d3cf333-fbf3-4b54-9f9b-a01cf98b9792-kube-api-access-bn9th\") pod 
\"frr-k8s-z5mhg\" (UID: \"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792\") " pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946158 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-metrics-certs\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946175 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfr8h\" (UniqueName: \"kubernetes.io/projected/1f700d11-ba3a-4c81-8c29-237825f56448-kube-api-access-nfr8h\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946323 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q6mj\" (UniqueName: \"kubernetes.io/projected/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-kube-api-access-5q6mj\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946392 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-cert\") pod \"controller-6968d8fdc4-zxxx8\" (UID: 
\"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:27 crc kubenswrapper[4827]: I0126 09:20:27.946472 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1f700d11-ba3a-4c81-8c29-237825f56448-metallb-excludel2\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.012523 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047498 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-metrics-certs\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047519 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfr8h\" (UniqueName: 
\"kubernetes.io/projected/1f700d11-ba3a-4c81-8c29-237825f56448-kube-api-access-nfr8h\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047554 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6mj\" (UniqueName: \"kubernetes.io/projected/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-kube-api-access-5q6mj\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047580 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-cert\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047598 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.047625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1f700d11-ba3a-4c81-8c29-237825f56448-metallb-excludel2\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.048281 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1f700d11-ba3a-4c81-8c29-237825f56448-metallb-excludel2\") pod \"speaker-9rcbb\" (UID: 
\"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: E0126 09:20:28.047311 4827 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 09:20:28 crc kubenswrapper[4827]: E0126 09:20:28.048346 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist podName:1f700d11-ba3a-4c81-8c29-237825f56448 nodeName:}" failed. No retries permitted until 2026-01-26 09:20:28.548335045 +0000 UTC m=+857.197006864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist") pod "speaker-9rcbb" (UID: "1f700d11-ba3a-4c81-8c29-237825f56448") : secret "metallb-memberlist" not found Jan 26 09:20:28 crc kubenswrapper[4827]: E0126 09:20:28.048992 4827 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 26 09:20:28 crc kubenswrapper[4827]: E0126 09:20:28.049031 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs podName:1f700d11-ba3a-4c81-8c29-237825f56448 nodeName:}" failed. No retries permitted until 2026-01-26 09:20:28.549020984 +0000 UTC m=+857.197692803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs") pod "speaker-9rcbb" (UID: "1f700d11-ba3a-4c81-8c29-237825f56448") : secret "speaker-certs-secret" not found Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.054932 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.055761 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-metrics-certs\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.064261 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-cert\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.068179 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfr8h\" (UniqueName: \"kubernetes.io/projected/1f700d11-ba3a-4c81-8c29-237825f56448-kube-api-access-nfr8h\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.070814 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q6mj\" (UniqueName: \"kubernetes.io/projected/0014db8e-0b1a-460c-b64e-bae6cdf0aaf0-kube-api-access-5q6mj\") pod \"controller-6968d8fdc4-zxxx8\" (UID: \"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0\") " pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.212255 
4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.352350 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.357361 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80d0ec40-8d37-43f1-93c8-8c970fba7072-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8pczr\" (UID: \"80d0ec40-8d37-43f1-93c8-8c970fba7072\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.424305 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zxxx8"] Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.553908 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.553975 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: E0126 09:20:28.554633 4827 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 09:20:28 crc kubenswrapper[4827]: 
E0126 09:20:28.554763 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist podName:1f700d11-ba3a-4c81-8c29-237825f56448 nodeName:}" failed. No retries permitted until 2026-01-26 09:20:29.554735432 +0000 UTC m=+858.203407271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist") pod "speaker-9rcbb" (UID: "1f700d11-ba3a-4c81-8c29-237825f56448") : secret "metallb-memberlist" not found Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.557561 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-metrics-certs\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.625725 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:28 crc kubenswrapper[4827]: I0126 09:20:28.853148 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr"] Jan 26 09:20:28 crc kubenswrapper[4827]: W0126 09:20:28.862766 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d0ec40_8d37_43f1_93c8_8c970fba7072.slice/crio-f6bd6c6dbfa2f26d2d5f3986c3c7dd500e85321379ce73bee0916b5f760049fa WatchSource:0}: Error finding container f6bd6c6dbfa2f26d2d5f3986c3c7dd500e85321379ce73bee0916b5f760049fa: Status 404 returned error can't find the container with id f6bd6c6dbfa2f26d2d5f3986c3c7dd500e85321379ce73bee0916b5f760049fa Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.076689 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"51d38f47f44c8fc5feb481a489c99ba9fb794e0e59549d5f69ea31b4400b5022"} Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.078019 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" event={"ID":"80d0ec40-8d37-43f1-93c8-8c970fba7072","Type":"ContainerStarted","Data":"f6bd6c6dbfa2f26d2d5f3986c3c7dd500e85321379ce73bee0916b5f760049fa"} Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.079027 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zxxx8" event={"ID":"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0","Type":"ContainerStarted","Data":"2933ecadd7a0ad6c62f221d9e6302c4d5cf91ac322042e2048658b1bd0517356"} Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.079049 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zxxx8" 
event={"ID":"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0","Type":"ContainerStarted","Data":"336787e97dcc3297e6bd04087960d7d36a8ad3722b43f27495a57386df02aa76"} Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.079058 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zxxx8" event={"ID":"0014db8e-0b1a-460c-b64e-bae6cdf0aaf0","Type":"ContainerStarted","Data":"a5cf3eb06e1153d82a4e259c96822c2d58cefe7fe23a6e54b79b294c5de6efd2"} Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.079763 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.131156 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zxxx8" podStartSLOduration=2.131139818 podStartE2EDuration="2.131139818s" podCreationTimestamp="2026-01-26 09:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:20:29.127836304 +0000 UTC m=+857.776508123" watchObservedRunningTime="2026-01-26 09:20:29.131139818 +0000 UTC m=+857.779811637" Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.568058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.579235 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1f700d11-ba3a-4c81-8c29-237825f56448-memberlist\") pod \"speaker-9rcbb\" (UID: \"1f700d11-ba3a-4c81-8c29-237825f56448\") " pod="metallb-system/speaker-9rcbb" Jan 26 09:20:29 crc kubenswrapper[4827]: I0126 09:20:29.644966 4827 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9rcbb" Jan 26 09:20:29 crc kubenswrapper[4827]: W0126 09:20:29.667026 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f700d11_ba3a_4c81_8c29_237825f56448.slice/crio-a6c61099de39a85d699a16d65edcc12abb58efbe0c9a6a1fab4c338f4fc35f8e WatchSource:0}: Error finding container a6c61099de39a85d699a16d65edcc12abb58efbe0c9a6a1fab4c338f4fc35f8e: Status 404 returned error can't find the container with id a6c61099de39a85d699a16d65edcc12abb58efbe0c9a6a1fab4c338f4fc35f8e Jan 26 09:20:30 crc kubenswrapper[4827]: I0126 09:20:30.115145 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rcbb" event={"ID":"1f700d11-ba3a-4c81-8c29-237825f56448","Type":"ContainerStarted","Data":"7cef89cfa524279113c9d0fd690807bf3e761086a45acdf5c4d79c7092ce252c"} Jan 26 09:20:30 crc kubenswrapper[4827]: I0126 09:20:30.115206 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rcbb" event={"ID":"1f700d11-ba3a-4c81-8c29-237825f56448","Type":"ContainerStarted","Data":"a6c61099de39a85d699a16d65edcc12abb58efbe0c9a6a1fab4c338f4fc35f8e"} Jan 26 09:20:31 crc kubenswrapper[4827]: I0126 09:20:31.130337 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rcbb" event={"ID":"1f700d11-ba3a-4c81-8c29-237825f56448","Type":"ContainerStarted","Data":"1756da9dad604a8f820350d531ffbf56e2276e6c95b29b529fb89d7862a3ba3f"} Jan 26 09:20:31 crc kubenswrapper[4827]: I0126 09:20:31.130600 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9rcbb" Jan 26 09:20:31 crc kubenswrapper[4827]: I0126 09:20:31.727486 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9rcbb" podStartSLOduration=4.7274714079999995 podStartE2EDuration="4.727471408s" 
podCreationTimestamp="2026-01-26 09:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:20:31.158073151 +0000 UTC m=+859.806744970" watchObservedRunningTime="2026-01-26 09:20:31.727471408 +0000 UTC m=+860.376143217" Jan 26 09:20:37 crc kubenswrapper[4827]: I0126 09:20:37.165276 4827 generic.go:334] "Generic (PLEG): container finished" podID="9d3cf333-fbf3-4b54-9f9b-a01cf98b9792" containerID="acc1ad8260db0a8b806643ecd984a0a0031b38787b9d035d93bc273325205963" exitCode=0 Jan 26 09:20:37 crc kubenswrapper[4827]: I0126 09:20:37.165465 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerDied","Data":"acc1ad8260db0a8b806643ecd984a0a0031b38787b9d035d93bc273325205963"} Jan 26 09:20:37 crc kubenswrapper[4827]: I0126 09:20:37.168274 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" event={"ID":"80d0ec40-8d37-43f1-93c8-8c970fba7072","Type":"ContainerStarted","Data":"bc23f93f88e1ab1a2cf346a7ad860c3c6a0b0e18a907935b956aca2f60ba22b3"} Jan 26 09:20:37 crc kubenswrapper[4827]: I0126 09:20:37.168764 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:37 crc kubenswrapper[4827]: I0126 09:20:37.234391 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" podStartSLOduration=2.469015354 podStartE2EDuration="10.233919521s" podCreationTimestamp="2026-01-26 09:20:27 +0000 UTC" firstStartedPulling="2026-01-26 09:20:28.868063099 +0000 UTC m=+857.516734918" lastFinishedPulling="2026-01-26 09:20:36.632967246 +0000 UTC m=+865.281639085" observedRunningTime="2026-01-26 09:20:37.226746786 +0000 UTC m=+865.875418615" watchObservedRunningTime="2026-01-26 
09:20:37.233919521 +0000 UTC m=+865.882591340" Jan 26 09:20:38 crc kubenswrapper[4827]: I0126 09:20:38.177282 4827 generic.go:334] "Generic (PLEG): container finished" podID="9d3cf333-fbf3-4b54-9f9b-a01cf98b9792" containerID="d6247644d9ee7f629c7575add8e8ac9037c549ed60b8a5d531f6a014362a2273" exitCode=0 Jan 26 09:20:38 crc kubenswrapper[4827]: I0126 09:20:38.177328 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerDied","Data":"d6247644d9ee7f629c7575add8e8ac9037c549ed60b8a5d531f6a014362a2273"} Jan 26 09:20:38 crc kubenswrapper[4827]: I0126 09:20:38.215583 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zxxx8" Jan 26 09:20:39 crc kubenswrapper[4827]: I0126 09:20:39.184513 4827 generic.go:334] "Generic (PLEG): container finished" podID="9d3cf333-fbf3-4b54-9f9b-a01cf98b9792" containerID="40e9c71606dfbfe7f6231444e28df65b55a4161eb686ac902f2a3c6490b3b135" exitCode=0 Jan 26 09:20:39 crc kubenswrapper[4827]: I0126 09:20:39.184705 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerDied","Data":"40e9c71606dfbfe7f6231444e28df65b55a4161eb686ac902f2a3c6490b3b135"} Jan 26 09:20:39 crc kubenswrapper[4827]: I0126 09:20:39.648282 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9rcbb" Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.195348 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"c25d4b6bba0e428150e13d8d085f130ff0cd8319ad962356a764b2daeac4d74c"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" 
event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"ec660b94c77b358488c15f9c46d56652d12cd368e53d917afe7db29fbf219a95"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196495 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"ff243fc8cbc8f59be018d5800af02a12da82adaaa6d10dbdbbea2c8523e26960"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196580 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"8b8045af4fa3a5fab02e21bab846a355179550e7d809a42c9ab253aba559ee79"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196695 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196717 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"fbe21a6162a427135dcc85d5bbb76dde21eccd0325b253536e24b64e95201094"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.196745 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5mhg" event={"ID":"9d3cf333-fbf3-4b54-9f9b-a01cf98b9792","Type":"ContainerStarted","Data":"b50dff1556ba3aba686381d85a3e1feabd92d2940c03ef7c60787e66b78b12af"} Jan 26 09:20:40 crc kubenswrapper[4827]: I0126 09:20:40.214854 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z5mhg" podStartSLOduration=5.43596939 podStartE2EDuration="13.214835405s" podCreationTimestamp="2026-01-26 09:20:27 +0000 UTC" firstStartedPulling="2026-01-26 09:20:28.864210499 +0000 UTC m=+857.512882328" lastFinishedPulling="2026-01-26 09:20:36.643076514 +0000 UTC m=+865.291748343" 
observedRunningTime="2026-01-26 09:20:40.212600072 +0000 UTC m=+868.861271901" watchObservedRunningTime="2026-01-26 09:20:40.214835405 +0000 UTC m=+868.863507224" Jan 26 09:20:42 crc kubenswrapper[4827]: I0126 09:20:42.999490 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.000387 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.003794 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.004093 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gjxxr" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.004442 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.013358 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.022255 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.048811 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xpf\" (UniqueName: \"kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf\") pod \"openstack-operator-index-8tqr9\" (UID: \"1d7adfea-94b5-4e74-bb5c-914e95771e0b\") " pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.104403 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="metallb-system/frr-k8s-z5mhg" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.149603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xpf\" (UniqueName: \"kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf\") pod \"openstack-operator-index-8tqr9\" (UID: \"1d7adfea-94b5-4e74-bb5c-914e95771e0b\") " pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.179679 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xpf\" (UniqueName: \"kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf\") pod \"openstack-operator-index-8tqr9\" (UID: \"1d7adfea-94b5-4e74-bb5c-914e95771e0b\") " pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.319854 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:43 crc kubenswrapper[4827]: I0126 09:20:43.543806 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:43 crc kubenswrapper[4827]: W0126 09:20:43.559813 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7adfea_94b5_4e74_bb5c_914e95771e0b.slice/crio-f205bc2fee572933d18bca92d382c0b0246a4e31b9cc8536b42c5617d780db47 WatchSource:0}: Error finding container f205bc2fee572933d18bca92d382c0b0246a4e31b9cc8536b42c5617d780db47: Status 404 returned error can't find the container with id f205bc2fee572933d18bca92d382c0b0246a4e31b9cc8536b42c5617d780db47 Jan 26 09:20:44 crc kubenswrapper[4827]: I0126 09:20:44.219233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tqr9" 
event={"ID":"1d7adfea-94b5-4e74-bb5c-914e95771e0b","Type":"ContainerStarted","Data":"f205bc2fee572933d18bca92d382c0b0246a4e31b9cc8536b42c5617d780db47"} Jan 26 09:20:45 crc kubenswrapper[4827]: I0126 09:20:45.228300 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tqr9" event={"ID":"1d7adfea-94b5-4e74-bb5c-914e95771e0b","Type":"ContainerStarted","Data":"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74"} Jan 26 09:20:45 crc kubenswrapper[4827]: I0126 09:20:45.241034 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8tqr9" podStartSLOduration=2.307237356 podStartE2EDuration="3.241015774s" podCreationTimestamp="2026-01-26 09:20:42 +0000 UTC" firstStartedPulling="2026-01-26 09:20:43.560880259 +0000 UTC m=+872.209552078" lastFinishedPulling="2026-01-26 09:20:44.494658687 +0000 UTC m=+873.143330496" observedRunningTime="2026-01-26 09:20:45.238302577 +0000 UTC m=+873.886974406" watchObservedRunningTime="2026-01-26 09:20:45.241015774 +0000 UTC m=+873.889687593" Jan 26 09:20:46 crc kubenswrapper[4827]: I0126 09:20:46.388436 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:46 crc kubenswrapper[4827]: I0126 09:20:46.792489 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ttbws"] Jan 26 09:20:46 crc kubenswrapper[4827]: I0126 09:20:46.794823 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:46 crc kubenswrapper[4827]: I0126 09:20:46.807145 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ttbws"] Jan 26 09:20:46 crc kubenswrapper[4827]: I0126 09:20:46.939042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrszz\" (UniqueName: \"kubernetes.io/projected/e1ce3819-36a2-4cc6-9942-e8881815e42e-kube-api-access-nrszz\") pod \"openstack-operator-index-ttbws\" (UID: \"e1ce3819-36a2-4cc6-9942-e8881815e42e\") " pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:47 crc kubenswrapper[4827]: I0126 09:20:47.040040 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrszz\" (UniqueName: \"kubernetes.io/projected/e1ce3819-36a2-4cc6-9942-e8881815e42e-kube-api-access-nrszz\") pod \"openstack-operator-index-ttbws\" (UID: \"e1ce3819-36a2-4cc6-9942-e8881815e42e\") " pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:47 crc kubenswrapper[4827]: I0126 09:20:47.073244 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrszz\" (UniqueName: \"kubernetes.io/projected/e1ce3819-36a2-4cc6-9942-e8881815e42e-kube-api-access-nrszz\") pod \"openstack-operator-index-ttbws\" (UID: \"e1ce3819-36a2-4cc6-9942-e8881815e42e\") " pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:47 crc kubenswrapper[4827]: I0126 09:20:47.151336 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:47 crc kubenswrapper[4827]: I0126 09:20:47.245562 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8tqr9" podUID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" containerName="registry-server" containerID="cri-o://334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74" gracePeriod=2 Jan 26 09:20:47 crc kubenswrapper[4827]: I0126 09:20:47.374005 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ttbws"] Jan 26 09:20:47 crc kubenswrapper[4827]: W0126 09:20:47.388005 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1ce3819_36a2_4cc6_9942_e8881815e42e.slice/crio-6fef762bf57571924298a03617511bd39994de520f65f04d73670cd953573566 WatchSource:0}: Error finding container 6fef762bf57571924298a03617511bd39994de520f65f04d73670cd953573566: Status 404 returned error can't find the container with id 6fef762bf57571924298a03617511bd39994de520f65f04d73670cd953573566 Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.227944 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.252602 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ttbws" event={"ID":"e1ce3819-36a2-4cc6-9942-e8881815e42e","Type":"ContainerStarted","Data":"114bdd60af4a5b1e8ff392de1293f1ad5daf0ae8307463239627053334d53d58"} Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.252668 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ttbws" event={"ID":"e1ce3819-36a2-4cc6-9942-e8881815e42e","Type":"ContainerStarted","Data":"6fef762bf57571924298a03617511bd39994de520f65f04d73670cd953573566"} Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.254814 4827 generic.go:334] "Generic (PLEG): container finished" podID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" containerID="334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74" exitCode=0 Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.254844 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8tqr9" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.254860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tqr9" event={"ID":"1d7adfea-94b5-4e74-bb5c-914e95771e0b","Type":"ContainerDied","Data":"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74"} Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.254903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tqr9" event={"ID":"1d7adfea-94b5-4e74-bb5c-914e95771e0b","Type":"ContainerDied","Data":"f205bc2fee572933d18bca92d382c0b0246a4e31b9cc8536b42c5617d780db47"} Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.254921 4827 scope.go:117] "RemoveContainer" containerID="334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.275060 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ttbws" podStartSLOduration=1.702520243 podStartE2EDuration="2.275040479s" podCreationTimestamp="2026-01-26 09:20:46 +0000 UTC" firstStartedPulling="2026-01-26 09:20:47.391982435 +0000 UTC m=+876.040654254" lastFinishedPulling="2026-01-26 09:20:47.964502661 +0000 UTC m=+876.613174490" observedRunningTime="2026-01-26 09:20:48.268085332 +0000 UTC m=+876.916757151" watchObservedRunningTime="2026-01-26 09:20:48.275040479 +0000 UTC m=+876.923712298" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.285927 4827 scope.go:117] "RemoveContainer" containerID="334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74" Jan 26 09:20:48 crc kubenswrapper[4827]: E0126 09:20:48.286429 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74\": container with ID 
starting with 334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74 not found: ID does not exist" containerID="334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.286484 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74"} err="failed to get container status \"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74\": rpc error: code = NotFound desc = could not find container \"334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74\": container with ID starting with 334aed0184c9fd0155921d09c26c5bc6ae17b72ee258337e59b872a86fe7ef74 not found: ID does not exist" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.359521 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48xpf\" (UniqueName: \"kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf\") pod \"1d7adfea-94b5-4e74-bb5c-914e95771e0b\" (UID: \"1d7adfea-94b5-4e74-bb5c-914e95771e0b\") " Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.365901 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf" (OuterVolumeSpecName: "kube-api-access-48xpf") pod "1d7adfea-94b5-4e74-bb5c-914e95771e0b" (UID: "1d7adfea-94b5-4e74-bb5c-914e95771e0b"). InnerVolumeSpecName "kube-api-access-48xpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.461721 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48xpf\" (UniqueName: \"kubernetes.io/projected/1d7adfea-94b5-4e74-bb5c-914e95771e0b-kube-api-access-48xpf\") on node \"crc\" DevicePath \"\"" Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.599389 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.604265 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8tqr9"] Jan 26 09:20:48 crc kubenswrapper[4827]: I0126 09:20:48.631944 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8pczr" Jan 26 09:20:49 crc kubenswrapper[4827]: I0126 09:20:49.711533 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" path="/var/lib/kubelet/pods/1d7adfea-94b5-4e74-bb5c-914e95771e0b/volumes" Jan 26 09:20:57 crc kubenswrapper[4827]: I0126 09:20:57.151975 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:57 crc kubenswrapper[4827]: I0126 09:20:57.152468 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:57 crc kubenswrapper[4827]: I0126 09:20:57.230899 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:57 crc kubenswrapper[4827]: I0126 09:20:57.357140 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ttbws" Jan 26 09:20:58 crc kubenswrapper[4827]: I0126 09:20:58.018277 4827 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z5mhg"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.830864 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"]
Jan 26 09:21:02 crc kubenswrapper[4827]: E0126 09:21:02.831468 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" containerName="registry-server"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.831484 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" containerName="registry-server"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.831611 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7adfea-94b5-4e74-bb5c-914e95771e0b" containerName="registry-server"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.832883 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.840138 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"]
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.844470 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m9t4m"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.855221 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.855276 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7bj\" (UniqueName: \"kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.855298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.956278 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.956341 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx7bj\" (UniqueName: \"kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.956366 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.956908 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.956950 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:02 crc kubenswrapper[4827]: I0126 09:21:02.983433 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx7bj\" (UniqueName: \"kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj\") pod \"35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") " pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:03 crc kubenswrapper[4827]: I0126 09:21:03.148999 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:03 crc kubenswrapper[4827]: I0126 09:21:03.386117 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"]
Jan 26 09:21:03 crc kubenswrapper[4827]: W0126 09:21:03.387207 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod386c499d_ff53_4460_a37a_60cd7a42f922.slice/crio-360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299 WatchSource:0}: Error finding container 360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299: Status 404 returned error can't find the container with id 360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299
Jan 26 09:21:04 crc kubenswrapper[4827]: I0126 09:21:04.391898 4827 generic.go:334] "Generic (PLEG): container finished" podID="386c499d-ff53-4460-a37a-60cd7a42f922" containerID="2a4f02fb6d996be68e4830ad110923bfd5db373fb1297abbf80431472437bfa9" exitCode=0
Jan 26 09:21:04 crc kubenswrapper[4827]: I0126 09:21:04.391946 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs" event={"ID":"386c499d-ff53-4460-a37a-60cd7a42f922","Type":"ContainerDied","Data":"2a4f02fb6d996be68e4830ad110923bfd5db373fb1297abbf80431472437bfa9"}
Jan 26 09:21:04 crc kubenswrapper[4827]: I0126 09:21:04.391975 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs" event={"ID":"386c499d-ff53-4460-a37a-60cd7a42f922","Type":"ContainerStarted","Data":"360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299"}
Jan 26 09:21:05 crc kubenswrapper[4827]: I0126 09:21:05.399869 4827 generic.go:334] "Generic (PLEG): container finished" podID="386c499d-ff53-4460-a37a-60cd7a42f922" containerID="0d96155f045c0d90b7d118ccd78726a90256de308abf674711ecdc075ba399e7" exitCode=0
Jan 26 09:21:05 crc kubenswrapper[4827]: I0126 09:21:05.400067 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs" event={"ID":"386c499d-ff53-4460-a37a-60cd7a42f922","Type":"ContainerDied","Data":"0d96155f045c0d90b7d118ccd78726a90256de308abf674711ecdc075ba399e7"}
Jan 26 09:21:06 crc kubenswrapper[4827]: I0126 09:21:06.410950 4827 generic.go:334] "Generic (PLEG): container finished" podID="386c499d-ff53-4460-a37a-60cd7a42f922" containerID="877181ab07ba51dc2e2021bc6d3937e2a331d4b2d773398de70e2c6818b13a5a" exitCode=0
Jan 26 09:21:06 crc kubenswrapper[4827]: I0126 09:21:06.411014 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs" event={"ID":"386c499d-ff53-4460-a37a-60cd7a42f922","Type":"ContainerDied","Data":"877181ab07ba51dc2e2021bc6d3937e2a331d4b2d773398de70e2c6818b13a5a"}
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.671864 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.718250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx7bj\" (UniqueName: \"kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj\") pod \"386c499d-ff53-4460-a37a-60cd7a42f922\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") "
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.718408 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle\") pod \"386c499d-ff53-4460-a37a-60cd7a42f922\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") "
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.718468 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util\") pod \"386c499d-ff53-4460-a37a-60cd7a42f922\" (UID: \"386c499d-ff53-4460-a37a-60cd7a42f922\") "
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.719420 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle" (OuterVolumeSpecName: "bundle") pod "386c499d-ff53-4460-a37a-60cd7a42f922" (UID: "386c499d-ff53-4460-a37a-60cd7a42f922"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.719557 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.724498 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj" (OuterVolumeSpecName: "kube-api-access-fx7bj") pod "386c499d-ff53-4460-a37a-60cd7a42f922" (UID: "386c499d-ff53-4460-a37a-60cd7a42f922"). InnerVolumeSpecName "kube-api-access-fx7bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.734523 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util" (OuterVolumeSpecName: "util") pod "386c499d-ff53-4460-a37a-60cd7a42f922" (UID: "386c499d-ff53-4460-a37a-60cd7a42f922"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.820833 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/386c499d-ff53-4460-a37a-60cd7a42f922-util\") on node \"crc\" DevicePath \"\""
Jan 26 09:21:07 crc kubenswrapper[4827]: I0126 09:21:07.820897 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx7bj\" (UniqueName: \"kubernetes.io/projected/386c499d-ff53-4460-a37a-60cd7a42f922-kube-api-access-fx7bj\") on node \"crc\" DevicePath \"\""
Jan 26 09:21:08 crc kubenswrapper[4827]: I0126 09:21:08.429911 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs" event={"ID":"386c499d-ff53-4460-a37a-60cd7a42f922","Type":"ContainerDied","Data":"360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299"}
Jan 26 09:21:08 crc kubenswrapper[4827]: I0126 09:21:08.429955 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360c41b58eb6dc0c59b34df5969fae442021eef356ed56ef83df550540aac299"
Jan 26 09:21:08 crc kubenswrapper[4827]: I0126 09:21:08.429990 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs"
Jan 26 09:21:12 crc kubenswrapper[4827]: I0126 09:21:12.268631 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:21:12 crc kubenswrapper[4827]: I0126 09:21:12.269170 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.845560 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"]
Jan 26 09:21:14 crc kubenswrapper[4827]: E0126 09:21:14.846059 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="util"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.846075 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="util"
Jan 26 09:21:14 crc kubenswrapper[4827]: E0126 09:21:14.846100 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="extract"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.846108 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="extract"
Jan 26 09:21:14 crc kubenswrapper[4827]: E0126 09:21:14.846122 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="pull"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.846130 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="pull"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.846252 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="386c499d-ff53-4460-a37a-60cd7a42f922" containerName="extract"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.846611 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.849737 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5rcj5"
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.894339 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"]
Jan 26 09:21:14 crc kubenswrapper[4827]: I0126 09:21:14.932570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69n8\" (UniqueName: \"kubernetes.io/projected/62ec7e94-ac44-47b3-8a19-d0b443a135d4-kube-api-access-k69n8\") pod \"openstack-operator-controller-init-f6799c556-8bwdr\" (UID: \"62ec7e94-ac44-47b3-8a19-d0b443a135d4\") " pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:15 crc kubenswrapper[4827]: I0126 09:21:15.034345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69n8\" (UniqueName: \"kubernetes.io/projected/62ec7e94-ac44-47b3-8a19-d0b443a135d4-kube-api-access-k69n8\") pod \"openstack-operator-controller-init-f6799c556-8bwdr\" (UID: \"62ec7e94-ac44-47b3-8a19-d0b443a135d4\") " pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:15 crc kubenswrapper[4827]: I0126 09:21:15.054632 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69n8\" (UniqueName: \"kubernetes.io/projected/62ec7e94-ac44-47b3-8a19-d0b443a135d4-kube-api-access-k69n8\") pod \"openstack-operator-controller-init-f6799c556-8bwdr\" (UID: \"62ec7e94-ac44-47b3-8a19-d0b443a135d4\") " pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:15 crc kubenswrapper[4827]: I0126 09:21:15.161918 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:15 crc kubenswrapper[4827]: I0126 09:21:15.612496 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"]
Jan 26 09:21:15 crc kubenswrapper[4827]: W0126 09:21:15.614537 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ec7e94_ac44_47b3_8a19_d0b443a135d4.slice/crio-7c7eeb5e8b3f52a2c3a4693c27c66f225b0720d495fb0ea0a3d04d528568eb66 WatchSource:0}: Error finding container 7c7eeb5e8b3f52a2c3a4693c27c66f225b0720d495fb0ea0a3d04d528568eb66: Status 404 returned error can't find the container with id 7c7eeb5e8b3f52a2c3a4693c27c66f225b0720d495fb0ea0a3d04d528568eb66
Jan 26 09:21:16 crc kubenswrapper[4827]: I0126 09:21:16.481882 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr" event={"ID":"62ec7e94-ac44-47b3-8a19-d0b443a135d4","Type":"ContainerStarted","Data":"7c7eeb5e8b3f52a2c3a4693c27c66f225b0720d495fb0ea0a3d04d528568eb66"}
Jan 26 09:21:20 crc kubenswrapper[4827]: I0126 09:21:20.519278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr" event={"ID":"62ec7e94-ac44-47b3-8a19-d0b443a135d4","Type":"ContainerStarted","Data":"a9bfaea629ee838f289842a348b3ed2ae96e7532dccd4584304794af0022fdaa"}
Jan 26 09:21:20 crc kubenswrapper[4827]: I0126 09:21:20.519989 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:20 crc kubenswrapper[4827]: I0126 09:21:20.574257 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr" podStartSLOduration=2.143975482 podStartE2EDuration="6.574237279s" podCreationTimestamp="2026-01-26 09:21:14 +0000 UTC" firstStartedPulling="2026-01-26 09:21:15.617540047 +0000 UTC m=+904.266211866" lastFinishedPulling="2026-01-26 09:21:20.047801834 +0000 UTC m=+908.696473663" observedRunningTime="2026-01-26 09:21:20.569810666 +0000 UTC m=+909.218482495" watchObservedRunningTime="2026-01-26 09:21:20.574237279 +0000 UTC m=+909.222909098"
Jan 26 09:21:25 crc kubenswrapper[4827]: I0126 09:21:25.166062 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f6799c556-8bwdr"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.017665 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"]
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.019613 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.097329 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"]
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.177003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.177061 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.177120 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg6xd\" (UniqueName: \"kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.278377 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg6xd\" (UniqueName: \"kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.278466 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.278502 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.278921 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.278989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.307714 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg6xd\" (UniqueName: \"kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd\") pod \"redhat-marketplace-h2dq2\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.334486 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2dq2"
Jan 26 09:21:40 crc kubenswrapper[4827]: I0126 09:21:40.678339 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"]
Jan 26 09:21:41 crc kubenswrapper[4827]: I0126 09:21:41.651032 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerStarted","Data":"979e431d1d304ad37a0eb163e096bd8deb60dc3b31666fd4ee0ba04485ff33ae"}
Jan 26 09:21:42 crc kubenswrapper[4827]: I0126 09:21:42.269105 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:21:42 crc kubenswrapper[4827]: I0126 09:21:42.269178 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:21:42 crc kubenswrapper[4827]: I0126 09:21:42.656761 4827 generic.go:334] "Generic (PLEG): container finished" podID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerID="df02f5da162df790902b2c166fe9369f6a3aaeccf4ad7663752bda9758b78b7f" exitCode=0
Jan 26 09:21:42 crc kubenswrapper[4827]: I0126 09:21:42.656800 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerDied","Data":"df02f5da162df790902b2c166fe9369f6a3aaeccf4ad7663752bda9758b78b7f"}
Jan 26 09:21:43 crc kubenswrapper[4827]: I0126 09:21:43.662898 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerStarted","Data":"44a120897bc0019da26df751ef1f5f115c85b7b29499f317fc135cbcf45e161e"}
Jan 26 09:21:44 crc kubenswrapper[4827]: I0126 09:21:44.669598 4827 generic.go:334] "Generic (PLEG): container finished" podID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerID="44a120897bc0019da26df751ef1f5f115c85b7b29499f317fc135cbcf45e161e" exitCode=0
Jan 26 09:21:44 crc kubenswrapper[4827]: I0126 09:21:44.669693 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerDied","Data":"44a120897bc0019da26df751ef1f5f115c85b7b29499f317fc135cbcf45e161e"}
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.024071 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.025119 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.027567 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cd6q2"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.036249 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.037018 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.043122 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c7wpw"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.055080 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.067539 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.074776 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.075676 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.084782 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.085938 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.089839 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fddlp"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.090005 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cb5vd"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.103194 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcszj\" (UniqueName: \"kubernetes.io/projected/4b99eea5-fc5a-4441-8858-1a500c49c429-kube-api-access-hcszj\") pod \"barbican-operator-controller-manager-7f86f8796f-82zp4\" (UID: \"4b99eea5-fc5a-4441-8858-1a500c49c429\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.103244 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt74q\" (UniqueName: \"kubernetes.io/projected/571aa666-d430-47aa-a48b-91b5a2555723-kube-api-access-xt74q\") pod \"cinder-operator-controller-manager-7478f7dbf9-7d95c\" (UID: \"571aa666-d430-47aa-a48b-91b5a2555723\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.103279 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpplf\" (UniqueName: \"kubernetes.io/projected/3759f1d2-941a-496f-a51e-aa2bd6fbeeec-kube-api-access-gpplf\") pod \"glance-operator-controller-manager-78fdd796fd-w42nm\" (UID: \"3759f1d2-941a-496f-a51e-aa2bd6fbeeec\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.103343 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmlp\" (UniqueName: \"kubernetes.io/projected/90405ca9-cf52-4ad1-94b9-54aacb8e5708-kube-api-access-nhmlp\") pod \"designate-operator-controller-manager-b45d7bf98-g47s2\" (UID: \"90405ca9-cf52-4ad1-94b9-54aacb8e5708\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.123988 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.138809 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.139771 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.151375 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-knfbx"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.157268 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.175787 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.176618 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.182081 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pg2n8"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.183706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.190830 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.195460 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.201751 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gd79r"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.201968 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204169 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btks\" (UniqueName: \"kubernetes.io/projected/64d1c33b-eace-4919-be5d-463f9621036a-kube-api-access-9btks\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204261 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmlp\" (UniqueName: \"kubernetes.io/projected/90405ca9-cf52-4ad1-94b9-54aacb8e5708-kube-api-access-nhmlp\") pod \"designate-operator-controller-manager-b45d7bf98-g47s2\" (UID: \"90405ca9-cf52-4ad1-94b9-54aacb8e5708\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204338 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcszj\" (UniqueName: \"kubernetes.io/projected/4b99eea5-fc5a-4441-8858-1a500c49c429-kube-api-access-hcszj\") pod \"barbican-operator-controller-manager-7f86f8796f-82zp4\" (UID: \"4b99eea5-fc5a-4441-8858-1a500c49c429\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204365 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt74q\" (UniqueName: \"kubernetes.io/projected/571aa666-d430-47aa-a48b-91b5a2555723-kube-api-access-xt74q\") pod \"cinder-operator-controller-manager-7478f7dbf9-7d95c\" (UID: \"571aa666-d430-47aa-a48b-91b5a2555723\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204395 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpplf\" (UniqueName: \"kubernetes.io/projected/3759f1d2-941a-496f-a51e-aa2bd6fbeeec-kube-api-access-gpplf\") pod \"glance-operator-controller-manager-78fdd796fd-w42nm\" (UID: \"3759f1d2-941a-496f-a51e-aa2bd6fbeeec\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204430 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85pd6\" (UniqueName: \"kubernetes.io/projected/86d77aba-3a0a-43d5-b592-2c45d866515c-kube-api-access-85pd6\") pod \"horizon-operator-controller-manager-77d5c5b54f-hj2q8\" (UID: \"86d77aba-3a0a-43d5-b592-2c45d866515c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204458 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2jx\" (UniqueName: \"kubernetes.io/projected/52992458-b4f0-409b-8be0-96a545a80839-kube-api-access-fp2jx\") pod \"heat-operator-controller-manager-594c8c9d5d-f4pjj\" (UID: \"52992458-b4f0-409b-8be0-96a545a80839\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.204490 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.220291 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.227046 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.227735 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.242077 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5kjff" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.253063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcszj\" (UniqueName: \"kubernetes.io/projected/4b99eea5-fc5a-4441-8858-1a500c49c429-kube-api-access-hcszj\") pod \"barbican-operator-controller-manager-7f86f8796f-82zp4\" (UID: \"4b99eea5-fc5a-4441-8858-1a500c49c429\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.253594 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmlp\" (UniqueName: \"kubernetes.io/projected/90405ca9-cf52-4ad1-94b9-54aacb8e5708-kube-api-access-nhmlp\") pod \"designate-operator-controller-manager-b45d7bf98-g47s2\" (UID: \"90405ca9-cf52-4ad1-94b9-54aacb8e5708\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.265991 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt74q\" (UniqueName: \"kubernetes.io/projected/571aa666-d430-47aa-a48b-91b5a2555723-kube-api-access-xt74q\") pod \"cinder-operator-controller-manager-7478f7dbf9-7d95c\" (UID: \"571aa666-d430-47aa-a48b-91b5a2555723\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.277026 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpplf\" (UniqueName: \"kubernetes.io/projected/3759f1d2-941a-496f-a51e-aa2bd6fbeeec-kube-api-access-gpplf\") pod \"glance-operator-controller-manager-78fdd796fd-w42nm\" (UID: 
\"3759f1d2-941a-496f-a51e-aa2bd6fbeeec\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.285218 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.285959 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.294907 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ntgk5" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.301037 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.305403 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85pd6\" (UniqueName: \"kubernetes.io/projected/86d77aba-3a0a-43d5-b592-2c45d866515c-kube-api-access-85pd6\") pod \"horizon-operator-controller-manager-77d5c5b54f-hj2q8\" (UID: \"86d77aba-3a0a-43d5-b592-2c45d866515c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.305444 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2jx\" (UniqueName: \"kubernetes.io/projected/52992458-b4f0-409b-8be0-96a545a80839-kube-api-access-fp2jx\") pod \"heat-operator-controller-manager-594c8c9d5d-f4pjj\" (UID: \"52992458-b4f0-409b-8be0-96a545a80839\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.305472 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.305495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9btks\" (UniqueName: \"kubernetes.io/projected/64d1c33b-eace-4919-be5d-463f9621036a-kube-api-access-9btks\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.306420 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.306463 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:21:45.806449011 +0000 UTC m=+934.455120830 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.322294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.339266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85pd6\" (UniqueName: \"kubernetes.io/projected/86d77aba-3a0a-43d5-b592-2c45d866515c-kube-api-access-85pd6\") pod \"horizon-operator-controller-manager-77d5c5b54f-hj2q8\" (UID: \"86d77aba-3a0a-43d5-b592-2c45d866515c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.341180 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.342114 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.348125 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.363352 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zdp4k" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.364320 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.372865 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9btks\" (UniqueName: \"kubernetes.io/projected/64d1c33b-eace-4919-be5d-463f9621036a-kube-api-access-9btks\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.385290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2jx\" (UniqueName: \"kubernetes.io/projected/52992458-b4f0-409b-8be0-96a545a80839-kube-api-access-fp2jx\") pod \"heat-operator-controller-manager-594c8c9d5d-f4pjj\" (UID: \"52992458-b4f0-409b-8be0-96a545a80839\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.410802 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.411490 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bt4z\" (UniqueName: \"kubernetes.io/projected/7588c42e-08d0-4c2d-b62d-07fc7257cf8f-kube-api-access-7bt4z\") pod \"manila-operator-controller-manager-78c6999f6f-tmb5m\" (UID: \"7588c42e-08d0-4c2d-b62d-07fc7257cf8f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.411546 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762ft\" (UniqueName: \"kubernetes.io/projected/84b85200-c9f6-4759-bb84-1513165fe742-kube-api-access-762ft\") pod \"keystone-operator-controller-manager-b8b6d4659-ldvbb\" (UID: \"84b85200-c9f6-4759-bb84-1513165fe742\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.411585 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftx4\" (UniqueName: \"kubernetes.io/projected/9f1d37d2-59af-4a07-8d64-f1636eee3929-kube-api-access-bftx4\") pod \"ironic-operator-controller-manager-598f7747c9-96nv5\" (UID: \"9f1d37d2-59af-4a07-8d64-f1636eee3929\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.435715 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.465872 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.466905 
4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.503113 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.506686 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.507412 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.512099 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46vk7" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.512856 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bt4z\" (UniqueName: \"kubernetes.io/projected/7588c42e-08d0-4c2d-b62d-07fc7257cf8f-kube-api-access-7bt4z\") pod \"manila-operator-controller-manager-78c6999f6f-tmb5m\" (UID: \"7588c42e-08d0-4c2d-b62d-07fc7257cf8f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.512906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762ft\" (UniqueName: \"kubernetes.io/projected/84b85200-c9f6-4759-bb84-1513165fe742-kube-api-access-762ft\") pod \"keystone-operator-controller-manager-b8b6d4659-ldvbb\" (UID: \"84b85200-c9f6-4759-bb84-1513165fe742\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.512929 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxl8n\" (UniqueName: \"kubernetes.io/projected/7fa19e2b-55c2-4e72-882a-eb4437b37c50-kube-api-access-sxl8n\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp\" (UID: \"7fa19e2b-55c2-4e72-882a-eb4437b37c50\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.512965 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bftx4\" (UniqueName: \"kubernetes.io/projected/9f1d37d2-59af-4a07-8d64-f1636eee3929-kube-api-access-bftx4\") pod \"ironic-operator-controller-manager-598f7747c9-96nv5\" (UID: \"9f1d37d2-59af-4a07-8d64-f1636eee3929\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.521978 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.541935 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.542733 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.554224 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.555019 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-b5f29" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.567299 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bt4z\" (UniqueName: \"kubernetes.io/projected/7588c42e-08d0-4c2d-b62d-07fc7257cf8f-kube-api-access-7bt4z\") pod \"manila-operator-controller-manager-78c6999f6f-tmb5m\" (UID: \"7588c42e-08d0-4c2d-b62d-07fc7257cf8f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.588490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762ft\" (UniqueName: \"kubernetes.io/projected/84b85200-c9f6-4759-bb84-1513165fe742-kube-api-access-762ft\") pod \"keystone-operator-controller-manager-b8b6d4659-ldvbb\" (UID: \"84b85200-c9f6-4759-bb84-1513165fe742\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.614482 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxl8n\" (UniqueName: \"kubernetes.io/projected/7fa19e2b-55c2-4e72-882a-eb4437b37c50-kube-api-access-sxl8n\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp\" (UID: \"7fa19e2b-55c2-4e72-882a-eb4437b37c50\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.644716 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.646429 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.646857 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bftx4\" (UniqueName: \"kubernetes.io/projected/9f1d37d2-59af-4a07-8d64-f1636eee3929-kube-api-access-bftx4\") pod \"ironic-operator-controller-manager-598f7747c9-96nv5\" (UID: \"9f1d37d2-59af-4a07-8d64-f1636eee3929\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.659013 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-brnkc" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.663519 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.730370 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.751603 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxl8n\" (UniqueName: \"kubernetes.io/projected/7fa19e2b-55c2-4e72-882a-eb4437b37c50-kube-api-access-sxl8n\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp\" (UID: \"7fa19e2b-55c2-4e72-882a-eb4437b37c50\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.760320 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.760361 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.761286 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjs58\" (UniqueName: \"kubernetes.io/projected/58431f1d-bbf1-459c-9f79-39c94712b9d7-kube-api-access-fjs58\") pod \"neutron-operator-controller-manager-78d58447c5-5tq7r\" (UID: \"58431f1d-bbf1-459c-9f79-39c94712b9d7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.761333 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtbnq\" (UniqueName: \"kubernetes.io/projected/e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4-kube-api-access-jtbnq\") pod \"nova-operator-controller-manager-7bdb645866-9g9tb\" (UID: \"e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.773920 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerStarted","Data":"70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734"} Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.795376 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.804939 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.805335 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.809292 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rfn98" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.821837 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.822825 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.833220 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.835041 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p4rq7" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.841051 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.844403 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.845242 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.849977 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sg7pk" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.858011 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"] Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865039 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsh8\" (UniqueName: \"kubernetes.io/projected/c3b4b2f4-2b69-4c36-b967-27c70f7a5767-kube-api-access-nwsh8\") pod \"octavia-operator-controller-manager-5f4cd88d46-l4gjk\" (UID: \"c3b4b2f4-2b69-4c36-b967-27c70f7a5767\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865090 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865119 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865180 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fjs58\" (UniqueName: \"kubernetes.io/projected/58431f1d-bbf1-459c-9f79-39c94712b9d7-kube-api-access-fjs58\") pod \"neutron-operator-controller-manager-78d58447c5-5tq7r\" (UID: \"58431f1d-bbf1-459c-9f79-39c94712b9d7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865212 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbnq\" (UniqueName: \"kubernetes.io/projected/e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4-kube-api-access-jtbnq\") pod \"nova-operator-controller-manager-7bdb645866-9g9tb\" (UID: \"e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865258 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7875\" (UniqueName: \"kubernetes.io/projected/87aea9ac-4117-4870-81a9-44adabc28383-kube-api-access-r7875\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.865389 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.865433 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:21:46.865411897 +0000 UTC m=+935.514083716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.865740 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.866666 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.869785 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gsz9w"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.887491 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.907405 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.910948 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbnq\" (UniqueName: \"kubernetes.io/projected/e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4-kube-api-access-jtbnq\") pod \"nova-operator-controller-manager-7bdb645866-9g9tb\" (UID: \"e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.919237 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.920062 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.921027 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.921341 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjs58\" (UniqueName: \"kubernetes.io/projected/58431f1d-bbf1-459c-9f79-39c94712b9d7-kube-api-access-fjs58\") pod \"neutron-operator-controller-manager-78d58447c5-5tq7r\" (UID: \"58431f1d-bbf1-459c-9f79-39c94712b9d7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.922754 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-frss9"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.937252 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.938864 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.942523 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6b5qn"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.955918 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970337 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtv5b\" (UniqueName: \"kubernetes.io/projected/1cb20984-f7df-4d0b-9434-86182d952bb1-kube-api-access-wtv5b\") pod \"telemetry-operator-controller-manager-85cd9769bb-9qw4q\" (UID: \"1cb20984-f7df-4d0b-9434-86182d952bb1\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970383 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7875\" (UniqueName: \"kubernetes.io/projected/87aea9ac-4117-4870-81a9-44adabc28383-kube-api-access-r7875\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970417 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwsh8\" (UniqueName: \"kubernetes.io/projected/c3b4b2f4-2b69-4c36-b967-27c70f7a5767-kube-api-access-nwsh8\") pod \"octavia-operator-controller-manager-5f4cd88d46-l4gjk\" (UID: \"c3b4b2f4-2b69-4c36-b967-27c70f7a5767\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970451 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970499 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdzvn\" (UniqueName: \"kubernetes.io/projected/12001a2b-7c86-41a4-ba17-a0d586aea6e5-kube-api-access-sdzvn\") pod \"swift-operator-controller-manager-547cbdb99f-fcj6p\" (UID: \"12001a2b-7c86-41a4-ba17-a0d586aea6e5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970531 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk5wc\" (UniqueName: \"kubernetes.io/projected/565c65e3-ea09-4057-81de-381377042c19-kube-api-access-qk5wc\") pod \"ovn-operator-controller-manager-6f75f45d54-vq7vj\" (UID: \"565c65e3-ea09-4057-81de-381377042c19\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.970556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xm6\" (UniqueName: \"kubernetes.io/projected/424c27d6-31d7-4a37-a7ef-c89099773070-kube-api-access-64xm6\") pod \"placement-operator-controller-manager-79d5ccc684-rzc28\" (UID: \"424c27d6-31d7-4a37-a7ef-c89099773070\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.970833 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 09:21:45 crc kubenswrapper[4827]: E0126 09:21:45.970880 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:46.470864076 +0000 UTC m=+935.119535895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.978609 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"]
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.979970 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:21:45 crc kubenswrapper[4827]: I0126 09:21:45.989997 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hnvcl"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.003696 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7875\" (UniqueName: \"kubernetes.io/projected/87aea9ac-4117-4870-81a9-44adabc28383-kube-api-access-r7875\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.010307 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwsh8\" (UniqueName: \"kubernetes.io/projected/c3b4b2f4-2b69-4c36-b967-27c70f7a5767-kube-api-access-nwsh8\") pod \"octavia-operator-controller-manager-5f4cd88d46-l4gjk\" (UID: \"c3b4b2f4-2b69-4c36-b967-27c70f7a5767\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.038192 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.055796 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.071765 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdzvn\" (UniqueName: \"kubernetes.io/projected/12001a2b-7c86-41a4-ba17-a0d586aea6e5-kube-api-access-sdzvn\") pod \"swift-operator-controller-manager-547cbdb99f-fcj6p\" (UID: \"12001a2b-7c86-41a4-ba17-a0d586aea6e5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.071825 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk5wc\" (UniqueName: \"kubernetes.io/projected/565c65e3-ea09-4057-81de-381377042c19-kube-api-access-qk5wc\") pod \"ovn-operator-controller-manager-6f75f45d54-vq7vj\" (UID: \"565c65e3-ea09-4057-81de-381377042c19\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.071855 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xm6\" (UniqueName: \"kubernetes.io/projected/424c27d6-31d7-4a37-a7ef-c89099773070-kube-api-access-64xm6\") pod \"placement-operator-controller-manager-79d5ccc684-rzc28\" (UID: \"424c27d6-31d7-4a37-a7ef-c89099773070\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.071938 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtv5b\" (UniqueName: \"kubernetes.io/projected/1cb20984-f7df-4d0b-9434-86182d952bb1-kube-api-access-wtv5b\") pod \"telemetry-operator-controller-manager-85cd9769bb-9qw4q\" (UID: \"1cb20984-f7df-4d0b-9434-86182d952bb1\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.071972 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7vdt\" (UniqueName: \"kubernetes.io/projected/2e2bf61f-063e-4fa4-aa92-6c14ee83fc66-kube-api-access-h7vdt\") pod \"test-operator-controller-manager-69797bbcbd-cb96z\" (UID: \"2e2bf61f-063e-4fa4-aa92-6c14ee83fc66\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.088206 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.105045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtv5b\" (UniqueName: \"kubernetes.io/projected/1cb20984-f7df-4d0b-9434-86182d952bb1-kube-api-access-wtv5b\") pod \"telemetry-operator-controller-manager-85cd9769bb-9qw4q\" (UID: \"1cb20984-f7df-4d0b-9434-86182d952bb1\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.117261 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.117835 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdzvn\" (UniqueName: \"kubernetes.io/projected/12001a2b-7c86-41a4-ba17-a0d586aea6e5-kube-api-access-sdzvn\") pod \"swift-operator-controller-manager-547cbdb99f-fcj6p\" (UID: \"12001a2b-7c86-41a4-ba17-a0d586aea6e5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.118405 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.121214 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xm6\" (UniqueName: \"kubernetes.io/projected/424c27d6-31d7-4a37-a7ef-c89099773070-kube-api-access-64xm6\") pod \"placement-operator-controller-manager-79d5ccc684-rzc28\" (UID: \"424c27d6-31d7-4a37-a7ef-c89099773070\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.146612 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bkt7z"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.163413 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk5wc\" (UniqueName: \"kubernetes.io/projected/565c65e3-ea09-4057-81de-381377042c19-kube-api-access-qk5wc\") pod \"ovn-operator-controller-manager-6f75f45d54-vq7vj\" (UID: \"565c65e3-ea09-4057-81de-381377042c19\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.173363 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxfd\" (UniqueName: \"kubernetes.io/projected/eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b-kube-api-access-2nxfd\") pod \"watcher-operator-controller-manager-564965969-mbt6s\" (UID: \"eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.173468 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7vdt\" (UniqueName: \"kubernetes.io/projected/2e2bf61f-063e-4fa4-aa92-6c14ee83fc66-kube-api-access-h7vdt\") pod \"test-operator-controller-manager-69797bbcbd-cb96z\" (UID: \"2e2bf61f-063e-4fa4-aa92-6c14ee83fc66\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.187769 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.189983 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.204929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7vdt\" (UniqueName: \"kubernetes.io/projected/2e2bf61f-063e-4fa4-aa92-6c14ee83fc66-kube-api-access-h7vdt\") pod \"test-operator-controller-manager-69797bbcbd-cb96z\" (UID: \"2e2bf61f-063e-4fa4-aa92-6c14ee83fc66\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.205542 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2dq2" podStartSLOduration=4.813228144 podStartE2EDuration="7.205516993s" podCreationTimestamp="2026-01-26 09:21:39 +0000 UTC" firstStartedPulling="2026-01-26 09:21:42.658102259 +0000 UTC m=+931.306774078" lastFinishedPulling="2026-01-26 09:21:45.050391108 +0000 UTC m=+933.699062927" observedRunningTime="2026-01-26 09:21:45.807468627 +0000 UTC m=+934.456140446" watchObservedRunningTime="2026-01-26 09:21:46.205516993 +0000 UTC m=+934.854188812"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.231949 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.275218 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxfd\" (UniqueName: \"kubernetes.io/projected/eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b-kube-api-access-2nxfd\") pod \"watcher-operator-controller-manager-564965969-mbt6s\" (UID: \"eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.275988 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.297341 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.299183 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxfd\" (UniqueName: \"kubernetes.io/projected/eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b-kube-api-access-2nxfd\") pod \"watcher-operator-controller-manager-564965969-mbt6s\" (UID: \"eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.320857 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.327423 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.338483 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.339914 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.357906 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.367174 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.367443 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2hbrc"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.367557 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.402296 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.441185 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.441243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpmzm\" (UniqueName: \"kubernetes.io/projected/8ba78edc-c408-4071-ac8f-432e12ebb708-kube-api-access-rpmzm\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.441336 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.473759 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"]
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.544598 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.554169 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.554724 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.555034 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:47.555013301 +0000 UTC m=+936.203685120 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.555131 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.555713 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.555750 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:47.055740921 +0000 UTC m=+935.704412740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.555632 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.556126 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpmzm\" (UniqueName: \"kubernetes.io/projected/8ba78edc-c408-4071-ac8f-432e12ebb708-kube-api-access-rpmzm\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.556371 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.556453 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.556482 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:47.056474272 +0000 UTC m=+935.705146091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.570313 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wtx7w"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.577751 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.603888 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpmzm\" (UniqueName: \"kubernetes.io/projected/8ba78edc-c408-4071-ac8f-432e12ebb708-kube-api-access-rpmzm\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.658636 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gq7\" (UniqueName: \"kubernetes.io/projected/7eea6dea-82a0-4c66-a5a0-0b7d11878264-kube-api-access-d2gq7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-58bb7\" (UID: \"7eea6dea-82a0-4c66-a5a0-0b7d11878264\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.761507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gq7\" (UniqueName: \"kubernetes.io/projected/7eea6dea-82a0-4c66-a5a0-0b7d11878264-kube-api-access-d2gq7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-58bb7\" (UID: \"7eea6dea-82a0-4c66-a5a0-0b7d11878264\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.767329 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.782886 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.829861 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gq7\" (UniqueName: \"kubernetes.io/projected/7eea6dea-82a0-4c66-a5a0-0b7d11878264-kube-api-access-d2gq7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-58bb7\" (UID: \"7eea6dea-82a0-4c66-a5a0-0b7d11878264\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.837411 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.838725 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.876594 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.876808 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: E0126 09:21:46.876850 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:21:48.876835691 +0000 UTC m=+937.525507510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.903441 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"]
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.927881 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.979816 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v2d4\" (UniqueName: \"kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.979889 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.979915 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:46 crc kubenswrapper[4827]: I0126 09:21:46.993288 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m"]
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.019981 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"]
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.027236 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"]
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.060710 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"]
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.093600 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.095564 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.108101 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v2d4\" (UniqueName: \"kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.108193 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:48.108142426 +0000 UTC m=+936.756814245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.108211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.108268 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.108301 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.108667 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.108701 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:48.108688171 +0000 UTC m=+936.757359990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.108845 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.109047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.133980 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"]
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.169792 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v2d4\" (UniqueName: \"kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4\") pod \"certified-operators-r9m8c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.184079 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb"]
Jan 26 09:21:47 crc
kubenswrapper[4827]: W0126 09:21:47.190838 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52992458_b4f0_409b_8be0_96a545a80839.slice/crio-32db94459258d7bd66520592cf94aa38238db3b354b0fdbcbf6d86ddb82c96e0 WatchSource:0}: Error finding container 32db94459258d7bd66520592cf94aa38238db3b354b0fdbcbf6d86ddb82c96e0: Status 404 returned error can't find the container with id 32db94459258d7bd66520592cf94aa38238db3b354b0fdbcbf6d86ddb82c96e0 Jan 26 09:21:47 crc kubenswrapper[4827]: W0126 09:21:47.198255 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d77aba_3a0a_43d5_b592_2c45d866515c.slice/crio-ff5124f1baeb2e3b0c77d8e30324437a243dcc09f3dd25bd75fdc1e7072a1e95 WatchSource:0}: Error finding container ff5124f1baeb2e3b0c77d8e30324437a243dcc09f3dd25bd75fdc1e7072a1e95: Status 404 returned error can't find the container with id ff5124f1baeb2e3b0c77d8e30324437a243dcc09f3dd25bd75fdc1e7072a1e95 Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.220994 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5"] Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.256945 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9m8c" Jan 26 09:21:47 crc kubenswrapper[4827]: W0126 09:21:47.309571 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1d37d2_59af_4a07_8d64_f1636eee3929.slice/crio-b6f8e9b5295b82d9dd8bf88d93f0218f6462cbca55451a9791ecf4e698bbd89b WatchSource:0}: Error finding container b6f8e9b5295b82d9dd8bf88d93f0218f6462cbca55451a9791ecf4e698bbd89b: Status 404 returned error can't find the container with id b6f8e9b5295b82d9dd8bf88d93f0218f6462cbca55451a9791ecf4e698bbd89b Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.621311 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.621828 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:47 crc kubenswrapper[4827]: E0126 09:21:47.621882 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:49.621864805 +0000 UTC m=+938.270536624 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.800921 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"] Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.832294 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" event={"ID":"9f1d37d2-59af-4a07-8d64-f1636eee3929","Type":"ContainerStarted","Data":"b6f8e9b5295b82d9dd8bf88d93f0218f6462cbca55451a9791ecf4e698bbd89b"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.864375 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"] Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.873333 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" event={"ID":"3759f1d2-941a-496f-a51e-aa2bd6fbeeec","Type":"ContainerStarted","Data":"b6086dbf5f0852af06b4df7db46c08fbfff9241f663fe61967dbe3f3f3702dc0"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.874684 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" event={"ID":"7588c42e-08d0-4c2d-b62d-07fc7257cf8f","Type":"ContainerStarted","Data":"8d5634446674ac2282e8b5594135c8fa9cde59121add63ab9d1f5837a512d881"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.875441 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q"] Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 
09:21:47.892398 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" event={"ID":"86d77aba-3a0a-43d5-b592-2c45d866515c","Type":"ContainerStarted","Data":"ff5124f1baeb2e3b0c77d8e30324437a243dcc09f3dd25bd75fdc1e7072a1e95"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.898167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" event={"ID":"52992458-b4f0-409b-8be0-96a545a80839","Type":"ContainerStarted","Data":"32db94459258d7bd66520592cf94aa38238db3b354b0fdbcbf6d86ddb82c96e0"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.902886 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c" event={"ID":"571aa666-d430-47aa-a48b-91b5a2555723","Type":"ContainerStarted","Data":"3807dec05a9d0640c5022bf4d36b16d8d016fe55d0af57d529f758793a52e5af"} Jan 26 09:21:47 crc kubenswrapper[4827]: W0126 09:21:47.913844 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb20984_f7df_4d0b_9434_86182d952bb1.slice/crio-1ab32e375fab1c028db15282b5c2b72783745534d116849de8d38161e6973b2c WatchSource:0}: Error finding container 1ab32e375fab1c028db15282b5c2b72783745534d116849de8d38161e6973b2c: Status 404 returned error can't find the container with id 1ab32e375fab1c028db15282b5c2b72783745534d116849de8d38161e6973b2c Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.922875 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" event={"ID":"4b99eea5-fc5a-4441-8858-1a500c49c429","Type":"ContainerStarted","Data":"a39278cac2663aa0339907c9d70f1deee2095410fa1dd8f0eee9e9ff1613eaf1"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.938258 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" event={"ID":"84b85200-c9f6-4759-bb84-1513165fe742","Type":"ContainerStarted","Data":"366ceb9e786cd19f472fdea0bb2b273ce31553b697ee362ac646342efed1ff2a"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.940749 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" event={"ID":"90405ca9-cf52-4ad1-94b9-54aacb8e5708","Type":"ContainerStarted","Data":"5fa28755af5baffe23400edf0c15ceb7d16b2e1910cd4c3f529d0ec512aa3b87"} Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.941196 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp"] Jan 26 09:21:47 crc kubenswrapper[4827]: I0126 09:21:47.960747 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"] Jan 26 09:21:48 crc kubenswrapper[4827]: W0126 09:21:48.008157 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424c27d6_31d7_4a37_a7ef_c89099773070.slice/crio-d7e3c14e1d2a7aabada39178785211559e084c1b95fbc7969f4a63e878ceadb3 WatchSource:0}: Error finding container d7e3c14e1d2a7aabada39178785211559e084c1b95fbc7969f4a63e878ceadb3: Status 404 returned error can't find the container with id d7e3c14e1d2a7aabada39178785211559e084c1b95fbc7969f4a63e878ceadb3 Jan 26 09:21:48 crc kubenswrapper[4827]: W0126 09:21:48.009654 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12001a2b_7c86_41a4_ba17_a0d586aea6e5.slice/crio-a26e14485aa669fb7609d2e0cce201fe44e4019bbf13f4a7e3ea19815e08d229 WatchSource:0}: Error finding container a26e14485aa669fb7609d2e0cce201fe44e4019bbf13f4a7e3ea19815e08d229: Status 404 returned error can't find the container with id 
a26e14485aa669fb7609d2e0cce201fe44e4019bbf13f4a7e3ea19815e08d229 Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.016137 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj"] Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.029179 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"] Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.047573 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"] Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.051974 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"] Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.103918 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h7vdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-cb96z_openstack-operators(2e2bf61f-063e-4fa4-aa92-6c14ee83fc66): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.105747 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" podUID="2e2bf61f-063e-4fa4-aa92-6c14ee83fc66" Jan 26 09:21:48 crc 
kubenswrapper[4827]: E0126 09:21:48.116826 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qk5wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-vq7vj_openstack-operators(565c65e3-ea09-4057-81de-381377042c19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.120799 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podUID="565c65e3-ea09-4057-81de-381377042c19" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.130871 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.130943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.131072 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.131116 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:50.13110261 +0000 UTC m=+938.779774429 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.131157 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.131178 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:50.131170922 +0000 UTC m=+938.779842741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.195989 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7"] Jan 26 09:21:48 crc kubenswrapper[4827]: W0126 09:21:48.243196 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eea6dea_82a0_4c66_a5a0_0b7d11878264.slice/crio-d6f88e10791e27aa521742001425739d17b0367df1c6f92877a17ed3ad0ecdbb WatchSource:0}: Error finding container d6f88e10791e27aa521742001425739d17b0367df1c6f92877a17ed3ad0ecdbb: Status 404 returned error can't find the container with id d6f88e10791e27aa521742001425739d17b0367df1c6f92877a17ed3ad0ecdbb Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.262280 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2gq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-58bb7_openstack-operators(7eea6dea-82a0-4c66-a5a0-0b7d11878264): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.262548 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"] Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.265867 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podUID="7eea6dea-82a0-4c66-a5a0-0b7d11878264" Jan 26 09:21:48 crc kubenswrapper[4827]: W0126 09:21:48.288224 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b4b2f4_2b69_4c36_b967_27c70f7a5767.slice/crio-a0882b37360e23e06c1ffe63cdbe7ac0d39a5b3eca2a10f9180a99cb1b2ce1b7 WatchSource:0}: Error finding container a0882b37360e23e06c1ffe63cdbe7ac0d39a5b3eca2a10f9180a99cb1b2ce1b7: Status 404 returned error can't find the container with id a0882b37360e23e06c1ffe63cdbe7ac0d39a5b3eca2a10f9180a99cb1b2ce1b7 Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.319695 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"] Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.959760 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q" event={"ID":"1cb20984-f7df-4d0b-9434-86182d952bb1","Type":"ContainerStarted","Data":"1ab32e375fab1c028db15282b5c2b72783745534d116849de8d38161e6973b2c"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.961600 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" event={"ID":"12001a2b-7c86-41a4-ba17-a0d586aea6e5","Type":"ContainerStarted","Data":"a26e14485aa669fb7609d2e0cce201fe44e4019bbf13f4a7e3ea19815e08d229"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.962955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.963126 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 
09:21:48.963199 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:21:52.963180132 +0000 UTC m=+941.611851951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.964325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" event={"ID":"565c65e3-ea09-4057-81de-381377042c19","Type":"ContainerStarted","Data":"9c193547078840a895dd2e995c74bd69fb05c2493e868439499367f439239fc3"} Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.967375 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podUID="565c65e3-ea09-4057-81de-381377042c19" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.968075 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" event={"ID":"e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4","Type":"ContainerStarted","Data":"e0c5c2333a0448bb0d1bf32f2eecf64a6665b9d00704462845d0f84217038019"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.971060 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" 
event={"ID":"2e2bf61f-063e-4fa4-aa92-6c14ee83fc66","Type":"ContainerStarted","Data":"d0bbb245e2a8356670d8df14e684277ed8476e92599885aab28851ebcd796a19"} Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.972069 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" podUID="2e2bf61f-063e-4fa4-aa92-6c14ee83fc66" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.973054 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" event={"ID":"7eea6dea-82a0-4c66-a5a0-0b7d11878264","Type":"ContainerStarted","Data":"d6f88e10791e27aa521742001425739d17b0367df1c6f92877a17ed3ad0ecdbb"} Jan 26 09:21:48 crc kubenswrapper[4827]: E0126 09:21:48.976057 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podUID="7eea6dea-82a0-4c66-a5a0-0b7d11878264" Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.988211 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7507e2f-81c3-496e-a03d-5117836c520c" containerID="99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e" exitCode=0 Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.988540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9m8c" 
event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerDied","Data":"99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.988589 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9m8c" event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerStarted","Data":"82de115ac2d6eab7ff2b7ecca6f6b606e97b1dc4c15637df0c25d38875119231"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.994053 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" event={"ID":"7fa19e2b-55c2-4e72-882a-eb4437b37c50","Type":"ContainerStarted","Data":"956f3a99d6c52d8ad4f5404535c45e6f69e29dd87022690fdad6375590d6fea9"} Jan 26 09:21:48 crc kubenswrapper[4827]: I0126 09:21:48.995708 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" event={"ID":"58431f1d-bbf1-459c-9f79-39c94712b9d7","Type":"ContainerStarted","Data":"b55f2c8e48b28c6002d30c10cd33ed7bb4f619c8f8bb7cc3ae114bce8d5e4d06"} Jan 26 09:21:49 crc kubenswrapper[4827]: I0126 09:21:49.027218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s" event={"ID":"eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b","Type":"ContainerStarted","Data":"d4397b91c961af804657395b19533311eaff2306e7d7be57e5cfc718909dfddf"} Jan 26 09:21:49 crc kubenswrapper[4827]: I0126 09:21:49.033021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" event={"ID":"c3b4b2f4-2b69-4c36-b967-27c70f7a5767","Type":"ContainerStarted","Data":"a0882b37360e23e06c1ffe63cdbe7ac0d39a5b3eca2a10f9180a99cb1b2ce1b7"} Jan 26 09:21:49 crc kubenswrapper[4827]: I0126 09:21:49.039903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" event={"ID":"424c27d6-31d7-4a37-a7ef-c89099773070","Type":"ContainerStarted","Data":"d7e3c14e1d2a7aabada39178785211559e084c1b95fbc7969f4a63e878ceadb3"} Jan 26 09:21:49 crc kubenswrapper[4827]: I0126 09:21:49.717711 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:49 crc kubenswrapper[4827]: E0126 09:21:49.717887 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:49 crc kubenswrapper[4827]: E0126 09:21:49.718037 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:53.717929776 +0000 UTC m=+942.366601595 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.105619 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podUID="565c65e3-ea09-4057-81de-381377042c19" Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.105793 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podUID="7eea6dea-82a0-4c66-a5a0-0b7d11878264" Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.105812 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" podUID="2e2bf61f-063e-4fa4-aa92-6c14ee83fc66" Jan 26 09:21:50 crc kubenswrapper[4827]: I0126 09:21:50.229898 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod 
\"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:50 crc kubenswrapper[4827]: I0126 09:21:50.230536 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.230770 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.230841 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.230885 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:21:54.230844933 +0000 UTC m=+942.879516832 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found Jan 26 09:21:50 crc kubenswrapper[4827]: E0126 09:21:50.230910 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. 
No retries permitted until 2026-01-26 09:21:54.230898364 +0000 UTC m=+942.879570293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found Jan 26 09:21:50 crc kubenswrapper[4827]: I0126 09:21:50.334932 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:21:50 crc kubenswrapper[4827]: I0126 09:21:50.334980 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:21:50 crc kubenswrapper[4827]: I0126 09:21:50.530964 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:21:51 crc kubenswrapper[4827]: I0126 09:21:51.092565 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7507e2f-81c3-496e-a03d-5117836c520c" containerID="e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857" exitCode=0 Jan 26 09:21:51 crc kubenswrapper[4827]: I0126 09:21:51.092666 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9m8c" event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerDied","Data":"e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857"} Jan 26 09:21:51 crc kubenswrapper[4827]: I0126 09:21:51.146367 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:21:52 crc kubenswrapper[4827]: I0126 09:21:52.772569 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"] Jan 26 09:21:52 crc kubenswrapper[4827]: I0126 09:21:52.981937 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:21:52 crc kubenswrapper[4827]: E0126 09:21:52.982149 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:52 crc kubenswrapper[4827]: E0126 09:21:52.982263 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:22:00.982234207 +0000 UTC m=+949.630906036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found Jan 26 09:21:53 crc kubenswrapper[4827]: I0126 09:21:53.104898 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2dq2" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server" containerID="cri-o://70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" gracePeriod=2 Jan 26 09:21:53 crc kubenswrapper[4827]: I0126 09:21:53.719761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:21:53 crc kubenswrapper[4827]: E0126 09:21:53.719934 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:53 crc kubenswrapper[4827]: E0126 09:21:53.719982 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:01.719968529 +0000 UTC m=+950.368640338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:21:54 crc kubenswrapper[4827]: I0126 09:21:54.124474 4827 generic.go:334] "Generic (PLEG): container finished" podID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" exitCode=0 Jan 26 09:21:54 crc kubenswrapper[4827]: I0126 09:21:54.124526 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerDied","Data":"70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734"} Jan 26 09:21:54 crc kubenswrapper[4827]: I0126 09:21:54.326930 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " 
pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:54 crc kubenswrapper[4827]: I0126 09:21:54.327013 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:21:54 crc kubenswrapper[4827]: E0126 09:21:54.327172 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 09:21:54 crc kubenswrapper[4827]: E0126 09:21:54.327238 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:02.327219916 +0000 UTC m=+950.975891755 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found Jan 26 09:21:54 crc kubenswrapper[4827]: E0126 09:21:54.327484 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 09:21:54 crc kubenswrapper[4827]: E0126 09:21:54.327688 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:02.327665498 +0000 UTC m=+950.976337327 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found Jan 26 09:22:00 crc kubenswrapper[4827]: E0126 09:22:00.335902 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:00 crc kubenswrapper[4827]: E0126 09:22:00.336823 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:00 crc kubenswrapper[4827]: E0126 09:22:00.337407 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:00 crc kubenswrapper[4827]: E0126 09:22:00.337512 4827 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-marketplace-h2dq2" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server" Jan 26 09:22:01 crc kubenswrapper[4827]: I0126 09:22:01.027569 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.027750 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.027805 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert podName:64d1c33b-eace-4919-be5d-463f9621036a nodeName:}" failed. No retries permitted until 2026-01-26 09:22:17.027785874 +0000 UTC m=+965.676457703 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert") pod "infra-operator-controller-manager-694cf4f878-skgxf" (UID: "64d1c33b-eace-4919-be5d-463f9621036a") : secret "infra-operator-webhook-server-cert" not found Jan 26 09:22:01 crc kubenswrapper[4827]: I0126 09:22:01.737700 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.737913 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.737995 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert podName:87aea9ac-4117-4870-81a9-44adabc28383 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:17.73797096 +0000 UTC m=+966.386642819 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert") pod "openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" (UID: "87aea9ac-4117-4870-81a9-44adabc28383") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.869613 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.870164 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjs58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-5tq7r_openstack-operators(58431f1d-bbf1-459c-9f79-39c94712b9d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:01 crc kubenswrapper[4827]: E0126 09:22:01.871913 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" podUID="58431f1d-bbf1-459c-9f79-39c94712b9d7" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.180917 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" podUID="58431f1d-bbf1-459c-9f79-39c94712b9d7" Jan 26 09:22:02 crc kubenswrapper[4827]: I0126 09:22:02.346770 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:02 crc kubenswrapper[4827]: I0126 09:22:02.346918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.346923 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.346976 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:18.346959005 +0000 UTC m=+966.995630824 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "webhook-server-cert" not found Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.347007 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.347046 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs podName:8ba78edc-c408-4071-ac8f-432e12ebb708 nodeName:}" failed. No retries permitted until 2026-01-26 09:22:18.347034537 +0000 UTC m=+966.995706356 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs") pod "openstack-operator-controller-manager-65d46cfd44-jsnhm" (UID: "8ba78edc-c408-4071-ac8f-432e12ebb708") : secret "metrics-server-cert" not found Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.427951 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.428132 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7bt4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-tmb5m_openstack-operators(7588c42e-08d0-4c2d-b62d-07fc7257cf8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.430126 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" podUID="7588c42e-08d0-4c2d-b62d-07fc7257cf8f" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.983278 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.983491 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64xm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-rzc28_openstack-operators(424c27d6-31d7-4a37-a7ef-c89099773070): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:02 crc kubenswrapper[4827]: E0126 09:22:02.984736 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" podUID="424c27d6-31d7-4a37-a7ef-c89099773070" Jan 26 09:22:03 crc kubenswrapper[4827]: E0126 09:22:03.185995 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" podUID="7588c42e-08d0-4c2d-b62d-07fc7257cf8f" Jan 26 09:22:03 crc kubenswrapper[4827]: E0126 09:22:03.186652 4827 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" podUID="424c27d6-31d7-4a37-a7ef-c89099773070" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.133874 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.134416 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp_openstack-operators(7fa19e2b-55c2-4e72-882a-eb4437b37c50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.136058 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" podUID="7fa19e2b-55c2-4e72-882a-eb4437b37c50" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.198113 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" podUID="7fa19e2b-55c2-4e72-882a-eb4437b37c50" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.663027 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.663193 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcszj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-82zp4_openstack-operators(4b99eea5-fc5a-4441-8858-1a500c49c429): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:05 crc kubenswrapper[4827]: E0126 09:22:05.664525 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" podUID="4b99eea5-fc5a-4441-8858-1a500c49c429" Jan 26 09:22:06 crc kubenswrapper[4827]: E0126 09:22:06.208482 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" podUID="4b99eea5-fc5a-4441-8858-1a500c49c429" Jan 26 09:22:07 crc kubenswrapper[4827]: E0126 09:22:07.025910 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 09:22:07 crc kubenswrapper[4827]: E0126 09:22:07.026116 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bftx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-96nv5_openstack-operators(9f1d37d2-59af-4a07-8d64-f1636eee3929): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:07 crc kubenswrapper[4827]: E0126 09:22:07.027280 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" podUID="9f1d37d2-59af-4a07-8d64-f1636eee3929" Jan 26 09:22:07 crc kubenswrapper[4827]: E0126 09:22:07.213054 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" podUID="9f1d37d2-59af-4a07-8d64-f1636eee3929" Jan 26 09:22:08 crc kubenswrapper[4827]: E0126 09:22:08.639797 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 09:22:08 crc kubenswrapper[4827]: E0126 09:22:08.640542 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpplf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-w42nm_openstack-operators(3759f1d2-941a-496f-a51e-aa2bd6fbeeec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:08 crc kubenswrapper[4827]: E0126 09:22:08.642520 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" podUID="3759f1d2-941a-496f-a51e-aa2bd6fbeeec" Jan 26 09:22:09 crc kubenswrapper[4827]: E0126 09:22:09.231017 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" podUID="3759f1d2-941a-496f-a51e-aa2bd6fbeeec" Jan 26 09:22:09 crc kubenswrapper[4827]: E0126 09:22:09.944028 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 09:22:09 crc kubenswrapper[4827]: E0126 09:22:09.945369 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp2jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-f4pjj_openstack-operators(52992458-b4f0-409b-8be0-96a545a80839): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:09 crc kubenswrapper[4827]: E0126 09:22:09.947251 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" podUID="52992458-b4f0-409b-8be0-96a545a80839" Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.235421 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" podUID="52992458-b4f0-409b-8be0-96a545a80839" Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.335371 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.335894 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.336407 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.336446 4827 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-h2dq2" 
podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server" Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.500521 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.500771 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdzvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-fcj6p_openstack-operators(12001a2b-7c86-41a4-ba17-a0d586aea6e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:10 crc kubenswrapper[4827]: E0126 09:22:10.502472 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" podUID="12001a2b-7c86-41a4-ba17-a0d586aea6e5" Jan 26 09:22:11 crc kubenswrapper[4827]: E0126 09:22:11.241332 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" podUID="12001a2b-7c86-41a4-ba17-a0d586aea6e5" Jan 26 09:22:12 crc kubenswrapper[4827]: I0126 09:22:12.269316 4827 patch_prober.go:28] interesting 
pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:22:12 crc kubenswrapper[4827]: I0126 09:22:12.269741 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:22:12 crc kubenswrapper[4827]: I0126 09:22:12.269800 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:22:12 crc kubenswrapper[4827]: I0126 09:22:12.270726 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:22:12 crc kubenswrapper[4827]: I0126 09:22:12.270831 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e" gracePeriod=600 Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.449576 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" 
Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.449780 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwsh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-l4gjk_openstack-operators(c3b4b2f4-2b69-4c36-b967-27c70f7a5767): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.450866 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" podUID="c3b4b2f4-2b69-4c36-b967-27c70f7a5767" Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.905189 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.905367 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85pd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-hj2q8_openstack-operators(86d77aba-3a0a-43d5-b592-2c45d866515c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:12 crc kubenswrapper[4827]: E0126 09:22:12.906551 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" podUID="86d77aba-3a0a-43d5-b592-2c45d866515c" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.256843 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e" exitCode=0 Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.256919 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" 
event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e"} Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.256968 4827 scope.go:117] "RemoveContainer" containerID="09984f15fc0db03533138db7cb3e03cb670316bfaa38b7a153d49d31b2be85ca" Jan 26 09:22:13 crc kubenswrapper[4827]: E0126 09:22:13.258542 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" podUID="86d77aba-3a0a-43d5-b592-2c45d866515c" Jan 26 09:22:13 crc kubenswrapper[4827]: E0126 09:22:13.259413 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" podUID="c3b4b2f4-2b69-4c36-b967-27c70f7a5767" Jan 26 09:22:13 crc kubenswrapper[4827]: E0126 09:22:13.500627 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 09:22:13 crc kubenswrapper[4827]: E0126 09:22:13.500857 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhmlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-g47s2_openstack-operators(90405ca9-cf52-4ad1-94b9-54aacb8e5708): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:13 crc kubenswrapper[4827]: E0126 09:22:13.504790 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" podUID="90405ca9-cf52-4ad1-94b9-54aacb8e5708" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.543252 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.627401 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content\") pod \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.627503 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities\") pod \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.627586 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg6xd\" (UniqueName: \"kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd\") pod \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\" (UID: \"85d0e0f7-5fb6-4aed-8b17-8a44107d703c\") " Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.628303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities" (OuterVolumeSpecName: "utilities") pod "85d0e0f7-5fb6-4aed-8b17-8a44107d703c" (UID: "85d0e0f7-5fb6-4aed-8b17-8a44107d703c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.639881 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd" (OuterVolumeSpecName: "kube-api-access-wg6xd") pod "85d0e0f7-5fb6-4aed-8b17-8a44107d703c" (UID: "85d0e0f7-5fb6-4aed-8b17-8a44107d703c"). InnerVolumeSpecName "kube-api-access-wg6xd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.654119 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85d0e0f7-5fb6-4aed-8b17-8a44107d703c" (UID: "85d0e0f7-5fb6-4aed-8b17-8a44107d703c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.734131 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg6xd\" (UniqueName: \"kubernetes.io/projected/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-kube-api-access-wg6xd\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.734175 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:13 crc kubenswrapper[4827]: I0126 09:22:13.734197 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d0e0f7-5fb6-4aed-8b17-8a44107d703c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:14 crc kubenswrapper[4827]: E0126 09:22:14.114031 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 09:22:14 crc kubenswrapper[4827]: E0126 09:22:14.114217 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtbnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-9g9tb_openstack-operators(e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:14 crc kubenswrapper[4827]: E0126 09:22:14.115872 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" podUID="e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4" Jan 26 09:22:14 crc kubenswrapper[4827]: I0126 09:22:14.266185 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2dq2" event={"ID":"85d0e0f7-5fb6-4aed-8b17-8a44107d703c","Type":"ContainerDied","Data":"979e431d1d304ad37a0eb163e096bd8deb60dc3b31666fd4ee0ba04485ff33ae"} Jan 26 09:22:14 crc kubenswrapper[4827]: I0126 09:22:14.266259 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2dq2" Jan 26 09:22:14 crc kubenswrapper[4827]: E0126 09:22:14.267263 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" podUID="e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4" Jan 26 09:22:14 crc kubenswrapper[4827]: E0126 09:22:14.269803 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" podUID="90405ca9-cf52-4ad1-94b9-54aacb8e5708" Jan 26 09:22:14 crc kubenswrapper[4827]: I0126 09:22:14.328440 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"] Jan 26 09:22:14 crc kubenswrapper[4827]: I0126 09:22:14.333003 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2dq2"] Jan 26 09:22:15 crc kubenswrapper[4827]: I0126 09:22:15.716531 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" path="/var/lib/kubelet/pods/85d0e0f7-5fb6-4aed-8b17-8a44107d703c/volumes" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.079297 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.087876 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64d1c33b-eace-4919-be5d-463f9621036a-cert\") pod \"infra-operator-controller-manager-694cf4f878-skgxf\" (UID: \"64d1c33b-eace-4919-be5d-463f9621036a\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.348121 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gd79r" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.354849 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.794624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:22:17 crc kubenswrapper[4827]: I0126 09:22:17.800514 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87aea9ac-4117-4870-81a9-44adabc28383-cert\") pod \"openstack-baremetal-operator-controller-manager-848957f4b4lzc5x\" (UID: \"87aea9ac-4117-4870-81a9-44adabc28383\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.058110 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p4rq7" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.066140 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.404609 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.404766 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.416646 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-metrics-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: \"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.416891 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ba78edc-c408-4071-ac8f-432e12ebb708-webhook-certs\") pod \"openstack-operator-controller-manager-65d46cfd44-jsnhm\" (UID: 
\"8ba78edc-c408-4071-ac8f-432e12ebb708\") " pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.527334 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2hbrc" Jan 26 09:22:18 crc kubenswrapper[4827]: I0126 09:22:18.535706 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" Jan 26 09:22:19 crc kubenswrapper[4827]: E0126 09:22:19.799520 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 09:22:19 crc kubenswrapper[4827]: E0126 09:22:19.799727 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qk5wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-vq7vj_openstack-operators(565c65e3-ea09-4057-81de-381377042c19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:19 crc kubenswrapper[4827]: E0126 09:22:19.801442 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podUID="565c65e3-ea09-4057-81de-381377042c19" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.219254 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.220498 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-762ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-ldvbb_openstack-operators(84b85200-c9f6-4759-bb84-1513165fe742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.221871 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" podUID="84b85200-c9f6-4759-bb84-1513165fe742" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.311036 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" podUID="84b85200-c9f6-4759-bb84-1513165fe742" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.795177 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.795510 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2gq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-58bb7_openstack-operators(7eea6dea-82a0-4c66-a5a0-0b7d11878264): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:22:21 crc kubenswrapper[4827]: E0126 09:22:21.796950 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podUID="7eea6dea-82a0-4c66-a5a0-0b7d11878264" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.003458 4827 scope.go:117] "RemoveContainer" containerID="70b2fb266da547c71cec7c815b639a6aa6627f001008afdeb2605ea1b91ee734" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.188948 4827 scope.go:117] "RemoveContainer" containerID="44a120897bc0019da26df751ef1f5f115c85b7b29499f317fc135cbcf45e161e" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.251142 4827 scope.go:117] "RemoveContainer" 
containerID="df02f5da162df790902b2c166fe9369f6a3aaeccf4ad7663752bda9758b78b7f" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.344129 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q" event={"ID":"1cb20984-f7df-4d0b-9434-86182d952bb1","Type":"ContainerStarted","Data":"afaba33eff85847438743d42ec99ba6667ec565cbc49b8240ba08ebafd24d402"} Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.344591 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.347097 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d"} Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.369619 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q" podStartSLOduration=9.650286419 podStartE2EDuration="37.369604313s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.926670461 +0000 UTC m=+936.575342290" lastFinishedPulling="2026-01-26 09:22:15.645988365 +0000 UTC m=+964.294660184" observedRunningTime="2026-01-26 09:22:22.364044388 +0000 UTC m=+971.012716207" watchObservedRunningTime="2026-01-26 09:22:22.369604313 +0000 UTC m=+971.018276132" Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.516463 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"] Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.706280 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"]
Jan 26 09:22:22 crc kubenswrapper[4827]: I0126 09:22:22.768033 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"]
Jan 26 09:22:22 crc kubenswrapper[4827]: W0126 09:22:22.810136 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d1c33b_eace_4919_be5d_463f9621036a.slice/crio-7bcd8675d2915910995766647afe0e6e84cc6d7c4c80ed9c602194d065ff52cc WatchSource:0}: Error finding container 7bcd8675d2915910995766647afe0e6e84cc6d7c4c80ed9c602194d065ff52cc: Status 404 returned error can't find the container with id 7bcd8675d2915910995766647afe0e6e84cc6d7c4c80ed9c602194d065ff52cc
Jan 26 09:22:22 crc kubenswrapper[4827]: W0126 09:22:22.820560 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ba78edc_c408_4071_ac8f_432e12ebb708.slice/crio-b2c67d96161270444b05790a4ab025978d9d948c1f566a4ae6451fa679aa3155 WatchSource:0}: Error finding container b2c67d96161270444b05790a4ab025978d9d948c1f566a4ae6451fa679aa3155: Status 404 returned error can't find the container with id b2c67d96161270444b05790a4ab025978d9d948c1f566a4ae6451fa679aa3155
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.354863 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" event={"ID":"424c27d6-31d7-4a37-a7ef-c89099773070","Type":"ContainerStarted","Data":"28b0006241973ae753c1918a221a57050aa0fb16b90d50721bf38047b0d34554"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.355764 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.357032 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" event={"ID":"8ba78edc-c408-4071-ac8f-432e12ebb708","Type":"ContainerStarted","Data":"d8f6fb88fe5e10a2fe751e6653cbc21b4c1be32f169b2e238f1a0b1bae61a7cc"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.357049 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" event={"ID":"8ba78edc-c408-4071-ac8f-432e12ebb708","Type":"ContainerStarted","Data":"b2c67d96161270444b05790a4ab025978d9d948c1f566a4ae6451fa679aa3155"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.357428 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.358442 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" event={"ID":"58431f1d-bbf1-459c-9f79-39c94712b9d7","Type":"ContainerStarted","Data":"5654083ed5a2fe0f5a1bfbd204710955c0deb0d852f146aaac21730c87cc8141"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.358783 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.359900 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" event={"ID":"7fa19e2b-55c2-4e72-882a-eb4437b37c50","Type":"ContainerStarted","Data":"c112a449895dddf8fb94fb9ed02379b4a2bcb436298068abf0c6c8afd3eba743"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.360204 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.361551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" event={"ID":"9f1d37d2-59af-4a07-8d64-f1636eee3929","Type":"ContainerStarted","Data":"7ff018b4fd6ad345f6e9580cc1990bf0cb5f6b7b3d15c66d27d07489d3619498"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.361910 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.362748 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" event={"ID":"64d1c33b-eace-4919-be5d-463f9621036a","Type":"ContainerStarted","Data":"7bcd8675d2915910995766647afe0e6e84cc6d7c4c80ed9c602194d065ff52cc"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.380480 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" event={"ID":"7588c42e-08d0-4c2d-b62d-07fc7257cf8f","Type":"ContainerStarted","Data":"fc7d9291c4da17449ba7ee587745a16da71e13a9536c4ddc36bf0dc08af147ba"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.381152 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.382544 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s" event={"ID":"eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b","Type":"ContainerStarted","Data":"f0328477a084ec86f3e7135c752b528fc142da4511e103450a6b3e34563a04d4"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.382907 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.384179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" event={"ID":"4b99eea5-fc5a-4441-8858-1a500c49c429","Type":"ContainerStarted","Data":"6752550c8f5b9ba049780de8c1ae1184192bfb316090feb29a61f4ab50854934"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.384547 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.386047 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" event={"ID":"2e2bf61f-063e-4fa4-aa92-6c14ee83fc66","Type":"ContainerStarted","Data":"b212fc36f486144ac29c1f839cf226e10f18fa7926fc1649396e497f3496cd2e"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.386360 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.387646 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c" event={"ID":"571aa666-d430-47aa-a48b-91b5a2555723","Type":"ContainerStarted","Data":"5a1c725745d64296621fe11a1c360575347deec468d8b3095bad5eb81f72b952"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.387981 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.392555 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" event={"ID":"87aea9ac-4117-4870-81a9-44adabc28383","Type":"ContainerStarted","Data":"08d985945d0d5fba6cbb560e72e386d1cf97476e0f324f79bbab8cf6f0da4b8e"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.394755 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9m8c" event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerStarted","Data":"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713"}
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.400458 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" podStartSLOduration=4.391680644 podStartE2EDuration="38.400436215s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.032872572 +0000 UTC m=+936.681544391" lastFinishedPulling="2026-01-26 09:22:22.041628143 +0000 UTC m=+970.690299962" observedRunningTime="2026-01-26 09:22:23.390873249 +0000 UTC m=+972.039545068" watchObservedRunningTime="2026-01-26 09:22:23.400436215 +0000 UTC m=+972.049108034"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.424314 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c" podStartSLOduration=9.693264272 podStartE2EDuration="38.424297048s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:46.91497103 +0000 UTC m=+935.563642849" lastFinishedPulling="2026-01-26 09:22:15.646003806 +0000 UTC m=+964.294675625" observedRunningTime="2026-01-26 09:22:23.421739287 +0000 UTC m=+972.070411106" watchObservedRunningTime="2026-01-26 09:22:23.424297048 +0000 UTC m=+972.072968867"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.454286 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s" podStartSLOduration=10.896105104 podStartE2EDuration="38.454268111s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.08790702 +0000 UTC m=+936.736578839" lastFinishedPulling="2026-01-26 09:22:15.646070027 +0000 UTC m=+964.294741846" observedRunningTime="2026-01-26 09:22:23.450157807 +0000 UTC m=+972.098829636" watchObservedRunningTime="2026-01-26 09:22:23.454268111 +0000 UTC m=+972.102939930"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.475568 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" podStartSLOduration=4.614782322 podStartE2EDuration="38.475546842s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.103782851 +0000 UTC m=+936.752454670" lastFinishedPulling="2026-01-26 09:22:21.964547361 +0000 UTC m=+970.613219190" observedRunningTime="2026-01-26 09:22:23.470679237 +0000 UTC m=+972.119351056" watchObservedRunningTime="2026-01-26 09:22:23.475546842 +0000 UTC m=+972.124218661"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.499318 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" podStartSLOduration=3.47566423 podStartE2EDuration="38.499300061s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.026751685 +0000 UTC m=+935.675423504" lastFinishedPulling="2026-01-26 09:22:22.050387516 +0000 UTC m=+970.699059335" observedRunningTime="2026-01-26 09:22:23.498676284 +0000 UTC m=+972.147348103" watchObservedRunningTime="2026-01-26 09:22:23.499300061 +0000 UTC m=+972.147971880"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.523700 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" podStartSLOduration=4.32566453 podStartE2EDuration="38.523682389s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.832418213 +0000 UTC m=+936.481090032" lastFinishedPulling="2026-01-26 09:22:22.030436062 +0000 UTC m=+970.679107891" observedRunningTime="2026-01-26 09:22:23.521003994 +0000 UTC m=+972.169675803" watchObservedRunningTime="2026-01-26 09:22:23.523682389 +0000 UTC m=+972.172354208"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.555485 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4" podStartSLOduration=4.517218054 podStartE2EDuration="39.555470421s" podCreationTimestamp="2026-01-26 09:21:44 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.015305107 +0000 UTC m=+935.663976916" lastFinishedPulling="2026-01-26 09:22:22.053557464 +0000 UTC m=+970.702229283" observedRunningTime="2026-01-26 09:22:23.551497452 +0000 UTC m=+972.200169281" watchObservedRunningTime="2026-01-26 09:22:23.555470421 +0000 UTC m=+972.204142240"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.574269 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" podStartSLOduration=3.114880929 podStartE2EDuration="38.574252563s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.360236958 +0000 UTC m=+936.008908777" lastFinishedPulling="2026-01-26 09:22:22.819608592 +0000 UTC m=+971.468280411" observedRunningTime="2026-01-26 09:22:23.570446767 +0000 UTC m=+972.219118586" watchObservedRunningTime="2026-01-26 09:22:23.574252563 +0000 UTC m=+972.222924382"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.641243 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" podStartSLOduration=4.636935937 podStartE2EDuration="38.641225324s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.036640616 +0000 UTC m=+936.685312435" lastFinishedPulling="2026-01-26 09:22:22.040930003 +0000 UTC m=+970.689601822" observedRunningTime="2026-01-26 09:22:23.598708753 +0000 UTC m=+972.247380572" watchObservedRunningTime="2026-01-26 09:22:23.641225324 +0000 UTC m=+972.289897143"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.641756 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm" podStartSLOduration=38.641751858 podStartE2EDuration="38.641751858s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:22:23.638849148 +0000 UTC m=+972.287520967" watchObservedRunningTime="2026-01-26 09:22:23.641751858 +0000 UTC m=+972.290423677"
Jan 26 09:22:23 crc kubenswrapper[4827]: I0126 09:22:23.726605 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9m8c" podStartSLOduration=4.942580522 podStartE2EDuration="37.726589834s" podCreationTimestamp="2026-01-26 09:21:46 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.994346018 +0000 UTC m=+937.643017837" lastFinishedPulling="2026-01-26 09:22:21.77835533 +0000 UTC m=+970.427027149" observedRunningTime="2026-01-26 09:22:23.699948535 +0000 UTC m=+972.348620364" watchObservedRunningTime="2026-01-26 09:22:23.726589834 +0000 UTC m=+972.375261653"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.157237 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25z2h"]
Jan 26 09:22:24 crc kubenswrapper[4827]: E0126 09:22:24.157799 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="extract-utilities"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.157812 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="extract-utilities"
Jan 26 09:22:24 crc kubenswrapper[4827]: E0126 09:22:24.157826 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.157833 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server"
Jan 26 09:22:24 crc kubenswrapper[4827]: E0126 09:22:24.157847 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="extract-content"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.157853 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="extract-content"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.157981 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d0e0f7-5fb6-4aed-8b17-8a44107d703c" containerName="registry-server"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.158960 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.179486 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25z2h"]
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.203974 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.204077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4n2\" (UniqueName: \"kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.204126 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.305247 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z4n2\" (UniqueName: \"kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.305317 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.305356 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.306193 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.306242 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.338482 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z4n2\" (UniqueName: \"kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2\") pod \"community-operators-25z2h\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.410849 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" event={"ID":"52992458-b4f0-409b-8be0-96a545a80839","Type":"ContainerStarted","Data":"bf6cc6619370e1d890d21129c1908eb4a409ca49f5924b582b8fb02d84559619"}
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.411853 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.415309 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" event={"ID":"3759f1d2-941a-496f-a51e-aa2bd6fbeeec","Type":"ContainerStarted","Data":"348d334026aa27dad79c40baec62f7e81973663b4086cfb04c4a9e26e05af729"}
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.415674 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.441974 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj" podStartSLOduration=2.720052153 podStartE2EDuration="39.441957606s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.211022953 +0000 UTC m=+935.859694772" lastFinishedPulling="2026-01-26 09:22:23.932928406 +0000 UTC m=+972.581600225" observedRunningTime="2026-01-26 09:22:24.438833098 +0000 UTC m=+973.087504917" watchObservedRunningTime="2026-01-26 09:22:24.441957606 +0000 UTC m=+973.090629425"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.479321 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:24 crc kubenswrapper[4827]: I0126 09:22:24.490594 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm" podStartSLOduration=2.378115095 podStartE2EDuration="39.490578576s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:46.885480911 +0000 UTC m=+935.534152730" lastFinishedPulling="2026-01-26 09:22:23.997944392 +0000 UTC m=+972.646616211" observedRunningTime="2026-01-26 09:22:24.484718863 +0000 UTC m=+973.133390682" watchObservedRunningTime="2026-01-26 09:22:24.490578576 +0000 UTC m=+973.139250385"
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.137881 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25z2h"]
Jan 26 09:22:25 crc kubenswrapper[4827]: W0126 09:22:25.146185 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992c4f04_66fa_4e96_958f_efe16d96c921.slice/crio-44e2fae10b501be5efdd5c209980fcfb62a22a836d9e28a0e2406fb2c2b74a10 WatchSource:0}: Error finding container 44e2fae10b501be5efdd5c209980fcfb62a22a836d9e28a0e2406fb2c2b74a10: Status 404 returned error can't find the container with id 44e2fae10b501be5efdd5c209980fcfb62a22a836d9e28a0e2406fb2c2b74a10
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.442420 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" event={"ID":"12001a2b-7c86-41a4-ba17-a0d586aea6e5","Type":"ContainerStarted","Data":"61f3f03f88c9e9c6c725bd626865fc12dbdb14b43e5216e2180223f61e2285dd"}
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.443330 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p"
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.448695 4827 generic.go:334] "Generic (PLEG): container finished" podID="992c4f04-66fa-4e96-958f-efe16d96c921" containerID="9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be" exitCode=0
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.448751 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerDied","Data":"9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be"}
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.448963 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerStarted","Data":"44e2fae10b501be5efdd5c209980fcfb62a22a836d9e28a0e2406fb2c2b74a10"}
Jan 26 09:22:25 crc kubenswrapper[4827]: I0126 09:22:25.485347 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" podStartSLOduration=4.147519652 podStartE2EDuration="40.485326576s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.038433396 +0000 UTC m=+936.687105215" lastFinishedPulling="2026-01-26 09:22:24.37624032 +0000 UTC m=+973.024912139" observedRunningTime="2026-01-26 09:22:25.470124894 +0000 UTC m=+974.118796713" watchObservedRunningTime="2026-01-26 09:22:25.485326576 +0000 UTC m=+974.133998395"
Jan 26 09:22:26 crc kubenswrapper[4827]: I0126 09:22:26.456545 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" event={"ID":"90405ca9-cf52-4ad1-94b9-54aacb8e5708","Type":"ContainerStarted","Data":"4e76bc471c7bb7c921159e8cf658f9dcad0069445560d77f9e2b133e9e9bd8ec"}
Jan 26 09:22:26 crc kubenswrapper[4827]: I0126 09:22:26.457075 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"
Jan 26 09:22:26 crc kubenswrapper[4827]: I0126 09:22:26.721360 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2" podStartSLOduration=3.514159079 podStartE2EDuration="41.721341328s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.190424591 +0000 UTC m=+935.839096410" lastFinishedPulling="2026-01-26 09:22:25.39760684 +0000 UTC m=+974.046278659" observedRunningTime="2026-01-26 09:22:26.477126355 +0000 UTC m=+975.125798174" watchObservedRunningTime="2026-01-26 09:22:26.721341328 +0000 UTC m=+975.370013147"
Jan 26 09:22:27 crc kubenswrapper[4827]: I0126 09:22:27.258020 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:22:27 crc kubenswrapper[4827]: I0126 09:22:27.258070 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9m8c"
Jan 26 09:22:28 crc kubenswrapper[4827]: I0126 09:22:28.299794 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r9m8c" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="registry-server" probeResult="failure" output=<
Jan 26 09:22:28 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s
Jan 26 09:22:28 crc kubenswrapper[4827]: >
Jan 26 09:22:28 crc kubenswrapper[4827]: I0126 09:22:28.541326 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65d46cfd44-jsnhm"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.485021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" event={"ID":"86d77aba-3a0a-43d5-b592-2c45d866515c","Type":"ContainerStarted","Data":"85159aa065555390695cec6bb364245f84dee614447afe3155bb4f011eb0e665"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.485676 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.486445 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" event={"ID":"87aea9ac-4117-4870-81a9-44adabc28383","Type":"ContainerStarted","Data":"e3822be2ef5aec2ad0ab0242b75012ee88f306f8732c58e667dc29c5e934718f"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.486585 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.487613 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" event={"ID":"64d1c33b-eace-4919-be5d-463f9621036a","Type":"ContainerStarted","Data":"ff18c929ca8042890e3dbd90348f163017e9f477ba7bf92ac240b3163bcffbeb"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.487939 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.488884 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" event={"ID":"e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4","Type":"ContainerStarted","Data":"b1224032ad18470004d2950250a846b002352f469cfaaeb681b2d2f805dd4fb7"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.489259 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.490887 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" event={"ID":"c3b4b2f4-2b69-4c36-b967-27c70f7a5767","Type":"ContainerStarted","Data":"89269e32fdb36b45f1f820d21ffb51f54ef1ab98591da9de0e1c126bf6610756"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.491440 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.492892 4827 generic.go:334] "Generic (PLEG): container finished" podID="992c4f04-66fa-4e96-958f-efe16d96c921" containerID="f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087" exitCode=0
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.492928 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerDied","Data":"f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087"}
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.555506 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8" podStartSLOduration=3.262727046 podStartE2EDuration="45.555488687s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.25158036 +0000 UTC m=+935.900252179" lastFinishedPulling="2026-01-26 09:22:29.544342001 +0000 UTC m=+978.193013820" observedRunningTime="2026-01-26 09:22:30.512196374 +0000 UTC m=+979.160868193" watchObservedRunningTime="2026-01-26 09:22:30.555488687 +0000 UTC m=+979.204160516"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.557529 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" podStartSLOduration=38.839943553 podStartE2EDuration="45.557520064s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:22:22.825033852 +0000 UTC m=+971.473705671" lastFinishedPulling="2026-01-26 09:22:29.542610363 +0000 UTC m=+978.191282182" observedRunningTime="2026-01-26 09:22:30.548855063 +0000 UTC m=+979.197526882" watchObservedRunningTime="2026-01-26 09:22:30.557520064 +0000 UTC m=+979.206191893"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.571149 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" podStartSLOduration=38.84635343 podStartE2EDuration="45.571117421s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:22:22.81307208 +0000 UTC m=+971.461743899" lastFinishedPulling="2026-01-26 09:22:29.537836071 +0000 UTC m=+978.186507890" observedRunningTime="2026-01-26 09:22:30.567612084 +0000 UTC m=+979.216283913" watchObservedRunningTime="2026-01-26 09:22:30.571117421 +0000 UTC m=+979.219789240"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.606639 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" podStartSLOduration=3.976378218 podStartE2EDuration="45.606617747s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.913073294 +0000 UTC m=+936.561745113" lastFinishedPulling="2026-01-26 09:22:29.543312823 +0000 UTC m=+978.191984642" observedRunningTime="2026-01-26 09:22:30.601646499 +0000 UTC m=+979.250318318" watchObservedRunningTime="2026-01-26 09:22:30.606617747 +0000 UTC m=+979.255289566"
Jan 26 09:22:30 crc kubenswrapper[4827]: I0126 09:22:30.630393 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" podStartSLOduration=4.388123276 podStartE2EDuration="45.630376778s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.300742181 +0000 UTC m=+936.949414000" lastFinishedPulling="2026-01-26 09:22:29.542995683 +0000 UTC m=+978.191667502" observedRunningTime="2026-01-26 09:22:30.626784257 +0000 UTC m=+979.275456076" watchObservedRunningTime="2026-01-26 09:22:30.630376778 +0000 UTC m=+979.279048597"
Jan 26 09:22:30 crc kubenswrapper[4827]: E0126 09:22:30.703847 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podUID="565c65e3-ea09-4057-81de-381377042c19"
Jan 26 09:22:32 crc kubenswrapper[4827]: I0126 09:22:32.508832 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerStarted","Data":"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3"}
Jan 26 09:22:32 crc kubenswrapper[4827]: I0126 09:22:32.537671 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25z2h" podStartSLOduration=2.433527504 podStartE2EDuration="8.537655404s" podCreationTimestamp="2026-01-26 09:22:24 +0000 UTC" firstStartedPulling="2026-01-26 09:22:25.450732666 +0000 UTC m=+974.099404485" lastFinishedPulling="2026-01-26 09:22:31.554860566 +0000 UTC m=+980.203532385" observedRunningTime="2026-01-26 09:22:32.532205764 +0000 UTC m=+981.180877583" watchObservedRunningTime="2026-01-26 09:22:32.537655404 +0000 UTC m=+981.186327223"
Jan 26 09:22:34 crc kubenswrapper[4827]: I0126 09:22:34.480223 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:34 crc kubenswrapper[4827]: I0126 09:22:34.480533 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:34 crc kubenswrapper[4827]: I0126 09:22:34.523574 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25z2h"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.353581 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-82zp4"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.369420 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7d95c"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.416366 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g47s2"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.470005 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-w42nm"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.505716 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-f4pjj"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.525088 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hj2q8"
Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.529658 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" event={"ID":"84b85200-c9f6-4759-bb84-1513165fe742","Type":"ContainerStarted","Data":"3b5e9c8a0ee0cde023ab27a0ec808a2db8eff2d853875dcde1fe52eae21ffaab"} Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.530269 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.571103 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" podStartSLOduration=2.736690894 podStartE2EDuration="50.571087752s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:47.321519593 +0000 UTC m=+935.970191402" lastFinishedPulling="2026-01-26 09:22:35.155916441 +0000 UTC m=+983.804588260" observedRunningTime="2026-01-26 09:22:35.567431181 +0000 UTC m=+984.216103000" watchObservedRunningTime="2026-01-26 09:22:35.571087752 +0000 UTC m=+984.219759561" Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.733662 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tmb5m" Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.844226 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp" Jan 26 09:22:35 crc kubenswrapper[4827]: I0126 09:22:35.924038 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-96nv5" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.059218 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9g9tb" Jan 26 09:22:36 crc kubenswrapper[4827]: 
I0126 09:22:36.193122 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-5tq7r" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.242351 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l4gjk" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.279380 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-9qw4q" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.328216 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cb96z" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.340352 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-mbt6s" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.360367 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rzc28" Jan 26 09:22:36 crc kubenswrapper[4827]: I0126 09:22:36.410685 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fcj6p" Jan 26 09:22:36 crc kubenswrapper[4827]: E0126 09:22:36.704761 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podUID="7eea6dea-82a0-4c66-a5a0-0b7d11878264" Jan 26 09:22:37 crc kubenswrapper[4827]: I0126 
09:22:37.303027 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9m8c" Jan 26 09:22:37 crc kubenswrapper[4827]: I0126 09:22:37.350549 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9m8c" Jan 26 09:22:37 crc kubenswrapper[4827]: I0126 09:22:37.361671 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-skgxf" Jan 26 09:22:37 crc kubenswrapper[4827]: I0126 09:22:37.542125 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"] Jan 26 09:22:38 crc kubenswrapper[4827]: I0126 09:22:38.071102 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-848957f4b4lzc5x" Jan 26 09:22:38 crc kubenswrapper[4827]: I0126 09:22:38.585466 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9m8c" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="registry-server" containerID="cri-o://eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713" gracePeriod=2 Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.041025 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9m8c" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.145634 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content\") pod \"e7507e2f-81c3-496e-a03d-5117836c520c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.145731 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities\") pod \"e7507e2f-81c3-496e-a03d-5117836c520c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.145862 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v2d4\" (UniqueName: \"kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4\") pod \"e7507e2f-81c3-496e-a03d-5117836c520c\" (UID: \"e7507e2f-81c3-496e-a03d-5117836c520c\") " Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.147406 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities" (OuterVolumeSpecName: "utilities") pod "e7507e2f-81c3-496e-a03d-5117836c520c" (UID: "e7507e2f-81c3-496e-a03d-5117836c520c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.155781 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4" (OuterVolumeSpecName: "kube-api-access-6v2d4") pod "e7507e2f-81c3-496e-a03d-5117836c520c" (UID: "e7507e2f-81c3-496e-a03d-5117836c520c"). InnerVolumeSpecName "kube-api-access-6v2d4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.190431 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7507e2f-81c3-496e-a03d-5117836c520c" (UID: "e7507e2f-81c3-496e-a03d-5117836c520c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.247915 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v2d4\" (UniqueName: \"kubernetes.io/projected/e7507e2f-81c3-496e-a03d-5117836c520c-kube-api-access-6v2d4\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.248144 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.248155 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7507e2f-81c3-496e-a03d-5117836c520c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.592507 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7507e2f-81c3-496e-a03d-5117836c520c" containerID="eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713" exitCode=0 Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.592551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9m8c" event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerDied","Data":"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713"} Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.592582 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-r9m8c" event={"ID":"e7507e2f-81c3-496e-a03d-5117836c520c","Type":"ContainerDied","Data":"82de115ac2d6eab7ff2b7ecca6f6b606e97b1dc4c15637df0c25d38875119231"} Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.592604 4827 scope.go:117] "RemoveContainer" containerID="eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.593099 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9m8c" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.619949 4827 scope.go:117] "RemoveContainer" containerID="e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.631493 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"] Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.636727 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9m8c"] Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.665721 4827 scope.go:117] "RemoveContainer" containerID="99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.681259 4827 scope.go:117] "RemoveContainer" containerID="eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713" Jan 26 09:22:39 crc kubenswrapper[4827]: E0126 09:22:39.681693 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713\": container with ID starting with eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713 not found: ID does not exist" containerID="eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 
09:22:39.681721 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713"} err="failed to get container status \"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713\": rpc error: code = NotFound desc = could not find container \"eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713\": container with ID starting with eacf24814e3c7d0ad7e3043e80ea61ee0ab1cf88e3c939244a8fedb56795b713 not found: ID does not exist" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.681740 4827 scope.go:117] "RemoveContainer" containerID="e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857" Jan 26 09:22:39 crc kubenswrapper[4827]: E0126 09:22:39.682005 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857\": container with ID starting with e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857 not found: ID does not exist" containerID="e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.682078 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857"} err="failed to get container status \"e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857\": rpc error: code = NotFound desc = could not find container \"e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857\": container with ID starting with e985361f1d7ea57cb9a9c3b6324969c8660dedf63aee2a9f6c4429619aaf7857 not found: ID does not exist" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.682093 4827 scope.go:117] "RemoveContainer" containerID="99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e" Jan 26 09:22:39 crc 
kubenswrapper[4827]: E0126 09:22:39.682323 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e\": container with ID starting with 99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e not found: ID does not exist" containerID="99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.682351 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e"} err="failed to get container status \"99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e\": rpc error: code = NotFound desc = could not find container \"99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e\": container with ID starting with 99944e353080bef46b0cea95b5ac5a91d91c906e686ce8addc9a4842d844de0e not found: ID does not exist" Jan 26 09:22:39 crc kubenswrapper[4827]: I0126 09:22:39.716501 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" path="/var/lib/kubelet/pods/e7507e2f-81c3-496e-a03d-5117836c520c/volumes" Jan 26 09:22:44 crc kubenswrapper[4827]: I0126 09:22:44.526980 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25z2h" Jan 26 09:22:44 crc kubenswrapper[4827]: I0126 09:22:44.579974 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25z2h"] Jan 26 09:22:44 crc kubenswrapper[4827]: I0126 09:22:44.623086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" event={"ID":"565c65e3-ea09-4057-81de-381377042c19","Type":"ContainerStarted","Data":"c94c60a10058706487dc2ce92533edef675f724aad1d788f82f656252276c96e"} 
Jan 26 09:22:44 crc kubenswrapper[4827]: I0126 09:22:44.623327 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25z2h" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="registry-server" containerID="cri-o://4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3" gracePeriod=2 Jan 26 09:22:44 crc kubenswrapper[4827]: I0126 09:22:44.643357 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" podStartSLOduration=3.460632173 podStartE2EDuration="59.643338737s" podCreationTimestamp="2026-01-26 09:21:45 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.116674199 +0000 UTC m=+936.765346018" lastFinishedPulling="2026-01-26 09:22:44.299380763 +0000 UTC m=+992.948052582" observedRunningTime="2026-01-26 09:22:44.641348112 +0000 UTC m=+993.290019931" watchObservedRunningTime="2026-01-26 09:22:44.643338737 +0000 UTC m=+993.292010566" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.026753 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25z2h" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.126591 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities\") pod \"992c4f04-66fa-4e96-958f-efe16d96c921\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.126672 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content\") pod \"992c4f04-66fa-4e96-958f-efe16d96c921\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.126749 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z4n2\" (UniqueName: \"kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2\") pod \"992c4f04-66fa-4e96-958f-efe16d96c921\" (UID: \"992c4f04-66fa-4e96-958f-efe16d96c921\") " Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.127366 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities" (OuterVolumeSpecName: "utilities") pod "992c4f04-66fa-4e96-958f-efe16d96c921" (UID: "992c4f04-66fa-4e96-958f-efe16d96c921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.149399 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2" (OuterVolumeSpecName: "kube-api-access-2z4n2") pod "992c4f04-66fa-4e96-958f-efe16d96c921" (UID: "992c4f04-66fa-4e96-958f-efe16d96c921"). InnerVolumeSpecName "kube-api-access-2z4n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.178099 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "992c4f04-66fa-4e96-958f-efe16d96c921" (UID: "992c4f04-66fa-4e96-958f-efe16d96c921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.231781 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.232617 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z4n2\" (UniqueName: \"kubernetes.io/projected/992c4f04-66fa-4e96-958f-efe16d96c921-kube-api-access-2z4n2\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.232716 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992c4f04-66fa-4e96-958f-efe16d96c921-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.630038 4827 generic.go:334] "Generic (PLEG): container finished" podID="992c4f04-66fa-4e96-958f-efe16d96c921" containerID="4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3" exitCode=0 Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.630091 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerDied","Data":"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3"} Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.630123 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-25z2h" event={"ID":"992c4f04-66fa-4e96-958f-efe16d96c921","Type":"ContainerDied","Data":"44e2fae10b501be5efdd5c209980fcfb62a22a836d9e28a0e2406fb2c2b74a10"} Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.630140 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2h" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.630146 4827 scope.go:117] "RemoveContainer" containerID="4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.654600 4827 scope.go:117] "RemoveContainer" containerID="f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.679572 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25z2h"] Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.682028 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ldvbb" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.692116 4827 scope.go:117] "RemoveContainer" containerID="9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.739906 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25z2h"] Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.758754 4827 scope.go:117] "RemoveContainer" containerID="4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3" Jan 26 09:22:45 crc kubenswrapper[4827]: E0126 09:22:45.759132 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3\": container with ID starting with 
4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3 not found: ID does not exist" containerID="4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.759161 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3"} err="failed to get container status \"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3\": rpc error: code = NotFound desc = could not find container \"4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3\": container with ID starting with 4e7302d0628918306658f46ed9adfbcda3db9f6b661627d8a147800c1206dce3 not found: ID does not exist" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.759187 4827 scope.go:117] "RemoveContainer" containerID="f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087" Jan 26 09:22:45 crc kubenswrapper[4827]: E0126 09:22:45.759377 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087\": container with ID starting with f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087 not found: ID does not exist" containerID="f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.759397 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087"} err="failed to get container status \"f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087\": rpc error: code = NotFound desc = could not find container \"f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087\": container with ID starting with f74fdd4d6643988000bb357e1d74c3b185818245057b36979252b27b389ef087 not found: ID does not 
exist" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.759413 4827 scope.go:117] "RemoveContainer" containerID="9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be" Jan 26 09:22:45 crc kubenswrapper[4827]: E0126 09:22:45.759587 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be\": container with ID starting with 9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be not found: ID does not exist" containerID="9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be" Jan 26 09:22:45 crc kubenswrapper[4827]: I0126 09:22:45.759607 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be"} err="failed to get container status \"9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be\": rpc error: code = NotFound desc = could not find container \"9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be\": container with ID starting with 9fdb9663a2fcce22ff676987a8e7394e89698476473648e16b603b492915f6be not found: ID does not exist" Jan 26 09:22:46 crc kubenswrapper[4827]: I0126 09:22:46.298473 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" Jan 26 09:22:47 crc kubenswrapper[4827]: I0126 09:22:47.712670 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" path="/var/lib/kubelet/pods/992c4f04-66fa-4e96-958f-efe16d96c921/volumes" Jan 26 09:22:50 crc kubenswrapper[4827]: I0126 09:22:50.705311 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:22:51 crc kubenswrapper[4827]: I0126 09:22:51.672597 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" event={"ID":"7eea6dea-82a0-4c66-a5a0-0b7d11878264","Type":"ContainerStarted","Data":"a590070516964688f9575b773595c2b0889db3bf4c732bee4e7edc945e52c6c4"} Jan 26 09:22:51 crc kubenswrapper[4827]: I0126 09:22:51.692719 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-58bb7" podStartSLOduration=2.763212328 podStartE2EDuration="1m5.692699233s" podCreationTimestamp="2026-01-26 09:21:46 +0000 UTC" firstStartedPulling="2026-01-26 09:21:48.26212802 +0000 UTC m=+936.910799839" lastFinishedPulling="2026-01-26 09:22:51.191614925 +0000 UTC m=+999.840286744" observedRunningTime="2026-01-26 09:22:51.691600013 +0000 UTC m=+1000.340271832" watchObservedRunningTime="2026-01-26 09:22:51.692699233 +0000 UTC m=+1000.341371062" Jan 26 09:22:56 crc kubenswrapper[4827]: I0126 09:22:56.300020 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vq7vj" Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.842691 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"] Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843574 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="extract-content" Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843591 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="extract-content" Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843607 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="extract-utilities" Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843616 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="extract-utilities"
Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843652 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="extract-utilities"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843662 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="extract-utilities"
Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843673 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="extract-content"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843680 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="extract-content"
Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843692 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843699 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: E0126 09:23:12.843712 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843719 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843872 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7507e2f-81c3-496e-a03d-5117836c520c" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.843895 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="992c4f04-66fa-4e96-958f-efe16d96c921" containerName="registry-server"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.845917 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.850070 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.850271 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7mgfk"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.850465 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.852093 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.855661 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"]
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.923047 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsk8\" (UniqueName: \"kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.923103 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.931470 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"]
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.951136 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"]
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.951240 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:12 crc kubenswrapper[4827]: I0126 09:23:12.953395 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.024679 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsk8\" (UniqueName: \"kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.024731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.025544 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.043867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsk8\" (UniqueName: \"kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8\") pod \"dnsmasq-dns-84bb9d8bd9-hnklz\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.125845 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmk8\" (UniqueName: \"kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.125925 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.126077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.172828 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.227387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmk8\" (UniqueName: \"kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.227447 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.227520 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.228290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.228293 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.248723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmk8\" (UniqueName: \"kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8\") pod \"dnsmasq-dns-5f854695bc-bq574\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.269901 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-bq574"
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.646589 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"]
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.725416 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"]
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.834474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz" event={"ID":"24a78cb5-f4d7-496b-865b-925dbceecc11","Type":"ContainerStarted","Data":"4dcf2778f378106a831387c289407d59c49d0ab013d829cf7f16a1b4cc45ad76"}
Jan 26 09:23:13 crc kubenswrapper[4827]: I0126 09:23:13.835384 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-bq574" event={"ID":"36492635-cf3c-4bb4-9d2b-e9584899ec03","Type":"ContainerStarted","Data":"df9f29838654cc18d086dbcb5a30c249166f24be0189c6102da6004656479d96"}
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.612394 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"]
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.653336 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"]
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.654491 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.689706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"]
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.789496 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmdn\" (UniqueName: \"kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.789632 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.789673 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.890952 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmdn\" (UniqueName: \"kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.891041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.891073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.892032 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.892869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.934911 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmdn\" (UniqueName: \"kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn\") pod \"dnsmasq-dns-744ffd65bc-sz2tk\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.945181 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"]
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.973108 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"]
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.974282 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:15 crc kubenswrapper[4827]: I0126 09:23:15.989433 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.009676 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"]
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.095763 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.096476 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmg7r\" (UniqueName: \"kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.096716 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.198185 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.198248 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmg7r\" (UniqueName: \"kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.198339 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.199316 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.199380 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.239514 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmg7r\" (UniqueName: \"kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r\") pod \"dnsmasq-dns-95f5f6995-vjjcm\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.315878 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.653318 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"]
Jan 26 09:23:16 crc kubenswrapper[4827]: W0126 09:23:16.674481 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59909240_fde2_4bdd_b0f4_d02985da5fc2.slice/crio-3528ce749967c1019bfadcb271ea581258d4218019440d5bfee69066c3efe745 WatchSource:0}: Error finding container 3528ce749967c1019bfadcb271ea581258d4218019440d5bfee69066c3efe745: Status 404 returned error can't find the container with id 3528ce749967c1019bfadcb271ea581258d4218019440d5bfee69066c3efe745
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.816273 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.817696 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.819663 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.821765 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.822302 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.822447 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.823174 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.823303 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.823483 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.823631 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ghkwb"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.873869 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" event={"ID":"59909240-fde2-4bdd-b0f4-d02985da5fc2","Type":"ContainerStarted","Data":"3528ce749967c1019bfadcb271ea581258d4218019440d5bfee69066c3efe745"}
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.890534 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"]
Jan 26 09:23:16 crc kubenswrapper[4827]: W0126 09:23:16.907299 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc785dedf_5109_4e22_be36_b04a971c38e0.slice/crio-76d07c026b576b229a193a395c407346fc7f4c564a67d086d4cafa857b67e090 WatchSource:0}: Error finding container 76d07c026b576b229a193a395c407346fc7f4c564a67d086d4cafa857b67e090: Status 404 returned error can't find the container with id 76d07c026b576b229a193a395c407346fc7f4c564a67d086d4cafa857b67e090
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918802 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918852 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918890 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918911 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918938 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qwb\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918971 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.918991 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.919013 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.919035 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.919071 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:16 crc kubenswrapper[4827]: I0126 09:23:16.919104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020719 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020745 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qwb\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020861 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020886 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020916 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020944 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.020979 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.021011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.022016 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.024915 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.027335 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.028531 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.029014 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.029048 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.038196 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.040157 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.052297 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qwb\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.054225 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.064194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.080007 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.162017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.166810 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.180159 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.181047 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184177 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184371 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184445 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184558 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184381 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vt4gm"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184677 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.184989 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327446 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327788 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327828 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327844 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgmk\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327883 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.327915 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.328020 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.328118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.328138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.328165 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.328304 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429260 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429305 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429323 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429339 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgmk\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429388 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.429999 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.430119 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.430145 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.430172 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.431085 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.432431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.432532 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.432573 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.432746 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.435655 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.435867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.439224 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.440299 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.461517 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgmk\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.471238 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.472480 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.512482 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.782202 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:23:17 crc kubenswrapper[4827]: W0126 09:23:17.798845 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aa4b7d1_606d_4833_9b9c_a2c78297c312.slice/crio-f938ab181dd124c1c3a05206fe99758a4e9a8b64d2e6d618095924ac7ae7ec9d WatchSource:0}: Error finding container f938ab181dd124c1c3a05206fe99758a4e9a8b64d2e6d618095924ac7ae7ec9d: Status 404 returned error can't find the container with id f938ab181dd124c1c3a05206fe99758a4e9a8b64d2e6d618095924ac7ae7ec9d Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.889270 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerStarted","Data":"f938ab181dd124c1c3a05206fe99758a4e9a8b64d2e6d618095924ac7ae7ec9d"} Jan 26 09:23:17 crc kubenswrapper[4827]: I0126 09:23:17.891670 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" event={"ID":"c785dedf-5109-4e22-be36-b04a971c38e0","Type":"ContainerStarted","Data":"76d07c026b576b229a193a395c407346fc7f4c564a67d086d4cafa857b67e090"} Jan 26 09:23:18 crc kubenswrapper[4827]: 
I0126 09:23:18.055253 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.337427 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.339397 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.339932 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.342705 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8nzc2" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.342962 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.344169 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.351761 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.352041 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.449891 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.449974 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450021 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450076 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450105 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7qh\" (UniqueName: \"kubernetes.io/projected/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kube-api-access-6j7qh\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450344 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450424 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.450477 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.551903 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.551946 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.551968 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.551990 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7qh\" (UniqueName: \"kubernetes.io/projected/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kube-api-access-6j7qh\") pod \"openstack-galera-0\" 
(UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.552049 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.552078 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.552096 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.552686 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.553416 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 
09:23:18.554367 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.554410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.555611 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.555956 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.557558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.579244 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.582196 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7qh\" (UniqueName: \"kubernetes.io/projected/3f89d129-88aa-4c87-ac49-33e52bd1cd4c-kube-api-access-6j7qh\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.595841 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3f89d129-88aa-4c87-ac49-33e52bd1cd4c\") " pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.681012 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 09:23:18 crc kubenswrapper[4827]: I0126 09:23:18.935675 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerStarted","Data":"46663b269d0c3c9d1d09d24e5cdd2bec71e0b25d2f6d4b2547643a48d278c4c4"} Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.733276 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.734889 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.739862 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.740100 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.740321 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bnnmp" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.743885 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.744456 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868608 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868682 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6p6d\" (UniqueName: 
\"kubernetes.io/projected/b1cad67f-3855-4463-980d-5372c7185eef-kube-api-access-x6p6d\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868782 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868823 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868846 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868874 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1cad67f-3855-4463-980d-5372c7185eef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.868965 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970607 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970678 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1cad67f-3855-4463-980d-5372c7185eef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970736 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970754 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970810 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6p6d\" (UniqueName: \"kubernetes.io/projected/b1cad67f-3855-4463-980d-5372c7185eef-kube-api-access-x6p6d\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.970828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.971013 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.971244 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1cad67f-3855-4463-980d-5372c7185eef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.971748 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.971848 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.972732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1cad67f-3855-4463-980d-5372c7185eef-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.976579 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.983188 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.986568 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dxl2s" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.987277 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.990152 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.990921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 09:23:19 crc kubenswrapper[4827]: I0126 09:23:19.997444 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.002208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cad67f-3855-4463-980d-5372c7185eef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.028439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.029380 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6p6d\" (UniqueName: 
\"kubernetes.io/projected/b1cad67f-3855-4463-980d-5372c7185eef-kube-api-access-x6p6d\") pod \"openstack-cell1-galera-0\" (UID: \"b1cad67f-3855-4463-980d-5372c7185eef\") " pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.067219 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.072300 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-kolla-config\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.072401 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-memcached-tls-certs\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.072478 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-config-data\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.072498 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-combined-ca-bundle\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.072529 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsn6g\" (UniqueName: \"kubernetes.io/projected/da6ed528-8ee6-421d-a921-a9b6d1382d45-kube-api-access-nsn6g\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.173556 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-combined-ca-bundle\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.173599 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-config-data\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.173620 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsn6g\" (UniqueName: \"kubernetes.io/projected/da6ed528-8ee6-421d-a921-a9b6d1382d45-kube-api-access-nsn6g\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.173665 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-kolla-config\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.173745 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.174367 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-kolla-config\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.174641 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da6ed528-8ee6-421d-a921-a9b6d1382d45-config-data\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.176954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-memcached-tls-certs\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.179281 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6ed528-8ee6-421d-a921-a9b6d1382d45-combined-ca-bundle\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.195217 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsn6g\" (UniqueName: \"kubernetes.io/projected/da6ed528-8ee6-421d-a921-a9b6d1382d45-kube-api-access-nsn6g\") pod \"memcached-0\" (UID: \"da6ed528-8ee6-421d-a921-a9b6d1382d45\") " pod="openstack/memcached-0" Jan 26 09:23:20 crc kubenswrapper[4827]: I0126 09:23:20.405108 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.031561 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.033090 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.039744 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7czgc" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.043988 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.218747 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5pk\" (UniqueName: \"kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk\") pod \"kube-state-metrics-0\" (UID: \"05543fb3-7874-4393-a8da-c3f6f7e65029\") " pod="openstack/kube-state-metrics-0" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.321191 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5pk\" (UniqueName: \"kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk\") pod \"kube-state-metrics-0\" (UID: \"05543fb3-7874-4393-a8da-c3f6f7e65029\") " pod="openstack/kube-state-metrics-0" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.339509 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5pk\" (UniqueName: \"kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk\") pod \"kube-state-metrics-0\" (UID: \"05543fb3-7874-4393-a8da-c3f6f7e65029\") " pod="openstack/kube-state-metrics-0" Jan 26 09:23:22 crc kubenswrapper[4827]: I0126 09:23:22.349942 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.969125 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sjsvm"] Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.970610 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.973864 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.973906 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.974027 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qvn7d" Jan 26 09:23:24 crc kubenswrapper[4827]: I0126 09:23:24.983603 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sjsvm"] Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.033533 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6jn8j"] Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.052927 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6jn8j"] Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.053053 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.166811 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/824497ea-421f-4928-83bd-908240595a4f-scripts\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.166876 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-run\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.166902 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60184b1a-f656-4b71-bf13-2953f715bc12-scripts\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.166929 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-etc-ovs\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.166967 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc 
kubenswrapper[4827]: I0126 09:23:25.166993 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2pvr\" (UniqueName: \"kubernetes.io/projected/60184b1a-f656-4b71-bf13-2953f715bc12-kube-api-access-d2pvr\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167022 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-log-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167055 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-log\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-lib\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167099 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6js8\" (UniqueName: \"kubernetes.io/projected/824497ea-421f-4928-83bd-908240595a4f-kube-api-access-l6js8\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: 
I0126 09:23:25.167127 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-combined-ca-bundle\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.167182 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-ovn-controller-tls-certs\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.268615 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-ovn-controller-tls-certs\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.268777 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/824497ea-421f-4928-83bd-908240595a4f-scripts\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270017 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-run\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270679 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/824497ea-421f-4928-83bd-908240595a4f-scripts\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.268812 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-run\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270749 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60184b1a-f656-4b71-bf13-2953f715bc12-scripts\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270765 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-etc-ovs\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270800 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run\") pod 
\"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270816 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2pvr\" (UniqueName: \"kubernetes.io/projected/60184b1a-f656-4b71-bf13-2953f715bc12-kube-api-access-d2pvr\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270839 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-log-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270866 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-log\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270884 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-lib\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270899 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6js8\" (UniqueName: \"kubernetes.io/projected/824497ea-421f-4928-83bd-908240595a4f-kube-api-access-l6js8\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 
26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-combined-ca-bundle\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.270932 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.271044 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.271536 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-log-ovn\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.271724 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-etc-ovs\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.271794 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/60184b1a-f656-4b71-bf13-2953f715bc12-var-run\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.271873 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-log\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.272148 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/824497ea-421f-4928-83bd-908240595a4f-var-lib\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.276204 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60184b1a-f656-4b71-bf13-2953f715bc12-scripts\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.278078 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-combined-ca-bundle\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.291731 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/60184b1a-f656-4b71-bf13-2953f715bc12-ovn-controller-tls-certs\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " 
pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.295252 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6js8\" (UniqueName: \"kubernetes.io/projected/824497ea-421f-4928-83bd-908240595a4f-kube-api-access-l6js8\") pod \"ovn-controller-ovs-6jn8j\" (UID: \"824497ea-421f-4928-83bd-908240595a4f\") " pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.300208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2pvr\" (UniqueName: \"kubernetes.io/projected/60184b1a-f656-4b71-bf13-2953f715bc12-kube-api-access-d2pvr\") pod \"ovn-controller-sjsvm\" (UID: \"60184b1a-f656-4b71-bf13-2953f715bc12\") " pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.323066 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.398030 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.852917 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.855626 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.858622 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.860663 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.860927 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-526wp" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.861092 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.861252 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.861364 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988595 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988695 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988744 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988780 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988808 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988846 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-config\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988914 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgc2\" (UniqueName: \"kubernetes.io/projected/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-kube-api-access-thgc2\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:25 crc kubenswrapper[4827]: I0126 09:23:25.988969 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090379 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090476 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090523 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090543 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090569 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " 
pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-config\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090663 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgc2\" (UniqueName: \"kubernetes.io/projected/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-kube-api-access-thgc2\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.090696 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.091903 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.092718 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-config\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.093097 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.093410 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.101349 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.112250 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgc2\" (UniqueName: \"kubernetes.io/projected/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-kube-api-access-thgc2\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.114100 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.116446 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " 
pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.118918 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e\") " pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:26 crc kubenswrapper[4827]: I0126 09:23:26.174816 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.413074 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.414837 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.429793 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.429930 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.430062 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p5wnq" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.430138 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.446282 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.560958 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561013 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwms\" (UniqueName: \"kubernetes.io/projected/f55a507a-514c-48de-a8e8-8a3ef3eef284-kube-api-access-kpwms\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-config\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561404 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-scripts\") 
pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561507 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.561525 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663509 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 
crc kubenswrapper[4827]: I0126 09:23:29.663573 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663610 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663664 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpwms\" (UniqueName: \"kubernetes.io/projected/f55a507a-514c-48de-a8e8-8a3ef3eef284-kube-api-access-kpwms\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.663742 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-config\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.664601 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-config\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.665496 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.666191 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.666681 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f55a507a-514c-48de-a8e8-8a3ef3eef284-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.670222 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.679419 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 
09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.680213 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55a507a-514c-48de-a8e8-8a3ef3eef284-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.686305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpwms\" (UniqueName: \"kubernetes.io/projected/f55a507a-514c-48de-a8e8-8a3ef3eef284-kube-api-access-kpwms\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.687929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f55a507a-514c-48de-a8e8-8a3ef3eef284\") " pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:29 crc kubenswrapper[4827]: I0126 09:23:29.746720 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.076076 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.076481 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvmk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-bq574_openstack(36492635-cf3c-4bb4-9d2b-e9584899ec03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.077842 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-bq574" podUID="36492635-cf3c-4bb4-9d2b-e9584899ec03" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.106975 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.107412 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tsk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-hnklz_openstack(24a78cb5-f4d7-496b-865b-925dbceecc11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:23:32 crc kubenswrapper[4827]: E0126 09:23:32.108680 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz" podUID="24a78cb5-f4d7-496b-865b-925dbceecc11" Jan 26 09:23:32 crc kubenswrapper[4827]: I0126 09:23:32.589344 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.318207 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-bq574" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.424257 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc\") pod \"36492635-cf3c-4bb4-9d2b-e9584899ec03\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.424324 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmk8\" (UniqueName: \"kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8\") pod \"36492635-cf3c-4bb4-9d2b-e9584899ec03\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.424465 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config\") pod \"36492635-cf3c-4bb4-9d2b-e9584899ec03\" (UID: \"36492635-cf3c-4bb4-9d2b-e9584899ec03\") " Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.426321 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config" (OuterVolumeSpecName: "config") pod "36492635-cf3c-4bb4-9d2b-e9584899ec03" (UID: "36492635-cf3c-4bb4-9d2b-e9584899ec03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.427417 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36492635-cf3c-4bb4-9d2b-e9584899ec03" (UID: "36492635-cf3c-4bb4-9d2b-e9584899ec03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.431859 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8" (OuterVolumeSpecName: "kube-api-access-nvmk8") pod "36492635-cf3c-4bb4-9d2b-e9584899ec03" (UID: "36492635-cf3c-4bb4-9d2b-e9584899ec03"). InnerVolumeSpecName "kube-api-access-nvmk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.526105 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.526127 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36492635-cf3c-4bb4-9d2b-e9584899ec03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.526137 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmk8\" (UniqueName: \"kubernetes.io/projected/36492635-cf3c-4bb4-9d2b-e9584899ec03-kube-api-access-nvmk8\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.553836 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.627561 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config\") pod \"24a78cb5-f4d7-496b-865b-925dbceecc11\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.627597 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tsk8\" (UniqueName: \"kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8\") pod \"24a78cb5-f4d7-496b-865b-925dbceecc11\" (UID: \"24a78cb5-f4d7-496b-865b-925dbceecc11\") " Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.628056 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config" (OuterVolumeSpecName: "config") pod "24a78cb5-f4d7-496b-865b-925dbceecc11" (UID: "24a78cb5-f4d7-496b-865b-925dbceecc11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.630027 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a78cb5-f4d7-496b-865b-925dbceecc11-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.642906 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8" (OuterVolumeSpecName: "kube-api-access-9tsk8") pod "24a78cb5-f4d7-496b-865b-925dbceecc11" (UID: "24a78cb5-f4d7-496b-865b-925dbceecc11"). InnerVolumeSpecName "kube-api-access-9tsk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.731316 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tsk8\" (UniqueName: \"kubernetes.io/projected/24a78cb5-f4d7-496b-865b-925dbceecc11-kube-api-access-9tsk8\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.741562 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:23:33 crc kubenswrapper[4827]: I0126 09:23:33.876908 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.010753 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.021319 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sjsvm"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.109803 4827 generic.go:334] "Generic (PLEG): container finished" podID="c785dedf-5109-4e22-be36-b04a971c38e0" containerID="bf50f9d76288974488855c1fb16667becb32f72d49f81db12ad465527748b020" exitCode=0 Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.110007 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" event={"ID":"c785dedf-5109-4e22-be36-b04a971c38e0","Type":"ContainerDied","Data":"bf50f9d76288974488855c1fb16667becb32f72d49f81db12ad465527748b020"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.112587 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b1cad67f-3855-4463-980d-5372c7185eef","Type":"ContainerStarted","Data":"7cbbc69deea7d7ca35a4e9937baad10d9f1a84a21a0b99e450785298a797a291"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.121714 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz" Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.122870 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-hnklz" event={"ID":"24a78cb5-f4d7-496b-865b-925dbceecc11","Type":"ContainerDied","Data":"4dcf2778f378106a831387c289407d59c49d0ab013d829cf7f16a1b4cc45ad76"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.126992 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05543fb3-7874-4393-a8da-c3f6f7e65029","Type":"ContainerStarted","Data":"3e19f7281021121ed84918f0d88d5e1efbdc232a6ded704be050ae3f41468966"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.134621 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-bq574" Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.134665 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-bq574" event={"ID":"36492635-cf3c-4bb4-9d2b-e9584899ec03","Type":"ContainerDied","Data":"df9f29838654cc18d086dbcb5a30c249166f24be0189c6102da6004656479d96"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.136994 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3f89d129-88aa-4c87-ac49-33e52bd1cd4c","Type":"ContainerStarted","Data":"1c380f9bf7bcda85a91e8035e617ac822e3103c2513eeb74d86190c89a64fcaa"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.142254 4827 generic.go:334] "Generic (PLEG): container finished" podID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerID="f968e941c2a00a3378d28117a10f8cec2b552a89b9ba969d25a3633e4d503d2e" exitCode=0 Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.142294 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" 
event={"ID":"59909240-fde2-4bdd-b0f4-d02985da5fc2","Type":"ContainerDied","Data":"f968e941c2a00a3378d28117a10f8cec2b552a89b9ba969d25a3633e4d503d2e"} Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.226537 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.235348 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hnklz"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.279258 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.293297 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-bq574"] Jan 26 09:23:34 crc kubenswrapper[4827]: I0126 09:23:34.391428 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6jn8j"] Jan 26 09:23:34 crc kubenswrapper[4827]: W0126 09:23:34.411685 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod824497ea_421f_4928_83bd_908240595a4f.slice/crio-9dfd5e86a9c499e0dbcee478f28c08aa900d1975b570dd0eb24d75f71b53dd03 WatchSource:0}: Error finding container 9dfd5e86a9c499e0dbcee478f28c08aa900d1975b570dd0eb24d75f71b53dd03: Status 404 returned error can't find the container with id 9dfd5e86a9c499e0dbcee478f28c08aa900d1975b570dd0eb24d75f71b53dd03 Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.154434 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" event={"ID":"59909240-fde2-4bdd-b0f4-d02985da5fc2","Type":"ContainerStarted","Data":"65551fbe776a4d1ec0bf5667e3e6ba6de5941a1a70a2d8c77f767d5cb4dadd1c"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.155632 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" 
Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.157946 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"da6ed528-8ee6-421d-a921-a9b6d1382d45","Type":"ContainerStarted","Data":"a0b89ed1ea7a5e1bb86c2593861fead55faf26d28eb12c99ecec2841fa99911a"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.163348 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerStarted","Data":"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.174422 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" event={"ID":"c785dedf-5109-4e22-be36-b04a971c38e0","Type":"ContainerStarted","Data":"a369dc7bd24b4a9bcfbff13b419847ec275d36d21179748465e7de99a7371919"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.175220 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.176107 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6jn8j" event={"ID":"824497ea-421f-4928-83bd-908240595a4f","Type":"ContainerStarted","Data":"9dfd5e86a9c499e0dbcee478f28c08aa900d1975b570dd0eb24d75f71b53dd03"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.177676 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerStarted","Data":"101e5bc11ac55692fe30de69f45ef6185d49a52aaaf7e4c37c5ebe506a0a1297"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.179209 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm" 
event={"ID":"60184b1a-f656-4b71-bf13-2953f715bc12","Type":"ContainerStarted","Data":"89e766bdcc14e3aaa6d81cb114447977447ed5a8a0e1f4c6d0dc2e2a8a5e87b4"} Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.196277 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" podStartSLOduration=3.357039656 podStartE2EDuration="20.196259556s" podCreationTimestamp="2026-01-26 09:23:15 +0000 UTC" firstStartedPulling="2026-01-26 09:23:16.679202628 +0000 UTC m=+1025.327874447" lastFinishedPulling="2026-01-26 09:23:33.518422538 +0000 UTC m=+1042.167094347" observedRunningTime="2026-01-26 09:23:35.172651247 +0000 UTC m=+1043.821323067" watchObservedRunningTime="2026-01-26 09:23:35.196259556 +0000 UTC m=+1043.844931375" Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.252723 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" podStartSLOduration=3.732987676 podStartE2EDuration="20.252703934s" podCreationTimestamp="2026-01-26 09:23:15 +0000 UTC" firstStartedPulling="2026-01-26 09:23:16.910315598 +0000 UTC m=+1025.558987417" lastFinishedPulling="2026-01-26 09:23:33.430031856 +0000 UTC m=+1042.078703675" observedRunningTime="2026-01-26 09:23:35.244939714 +0000 UTC m=+1043.893611533" watchObservedRunningTime="2026-01-26 09:23:35.252703934 +0000 UTC m=+1043.901375753" Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.297476 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.387551 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 09:23:35 crc kubenswrapper[4827]: I0126 09:23:35.712222 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a78cb5-f4d7-496b-865b-925dbceecc11" path="/var/lib/kubelet/pods/24a78cb5-f4d7-496b-865b-925dbceecc11/volumes" Jan 26 09:23:35 crc kubenswrapper[4827]: 
I0126 09:23:35.712666 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36492635-cf3c-4bb4-9d2b-e9584899ec03" path="/var/lib/kubelet/pods/36492635-cf3c-4bb4-9d2b-e9584899ec03/volumes" Jan 26 09:23:35 crc kubenswrapper[4827]: W0126 09:23:35.800467 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3e64751_5ea0_49b6_b93f_5c9ac2b5c58e.slice/crio-729257c0a5bcf49008d8875a9896a4f37c293b302fb3863f565918aa8631b6d5 WatchSource:0}: Error finding container 729257c0a5bcf49008d8875a9896a4f37c293b302fb3863f565918aa8631b6d5: Status 404 returned error can't find the container with id 729257c0a5bcf49008d8875a9896a4f37c293b302fb3863f565918aa8631b6d5 Jan 26 09:23:36 crc kubenswrapper[4827]: I0126 09:23:36.190082 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f55a507a-514c-48de-a8e8-8a3ef3eef284","Type":"ContainerStarted","Data":"1df383e7905fba7c287e33591fb0316180beec91ef81b8b54d6a39252c62f8d7"} Jan 26 09:23:36 crc kubenswrapper[4827]: I0126 09:23:36.191948 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e","Type":"ContainerStarted","Data":"729257c0a5bcf49008d8875a9896a4f37c293b302fb3863f565918aa8631b6d5"} Jan 26 09:23:41 crc kubenswrapper[4827]: I0126 09:23:41.153783 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" Jan 26 09:23:41 crc kubenswrapper[4827]: I0126 09:23:41.320849 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" Jan 26 09:23:41 crc kubenswrapper[4827]: I0126 09:23:41.381452 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"] Jan 26 09:23:41 crc kubenswrapper[4827]: I0126 09:23:41.381894 4827 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerName="dnsmasq-dns" containerID="cri-o://65551fbe776a4d1ec0bf5667e3e6ba6de5941a1a70a2d8c77f767d5cb4dadd1c" gracePeriod=10 Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.292915 4827 generic.go:334] "Generic (PLEG): container finished" podID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerID="65551fbe776a4d1ec0bf5667e3e6ba6de5941a1a70a2d8c77f767d5cb4dadd1c" exitCode=0 Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.293182 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" event={"ID":"59909240-fde2-4bdd-b0f4-d02985da5fc2","Type":"ContainerDied","Data":"65551fbe776a4d1ec0bf5667e3e6ba6de5941a1a70a2d8c77f767d5cb4dadd1c"} Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.517140 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.699767 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config\") pod \"59909240-fde2-4bdd-b0f4-d02985da5fc2\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.699813 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsmdn\" (UniqueName: \"kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn\") pod \"59909240-fde2-4bdd-b0f4-d02985da5fc2\" (UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.699842 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc\") pod \"59909240-fde2-4bdd-b0f4-d02985da5fc2\" 
(UID: \"59909240-fde2-4bdd-b0f4-d02985da5fc2\") " Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.704919 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn" (OuterVolumeSpecName: "kube-api-access-lsmdn") pod "59909240-fde2-4bdd-b0f4-d02985da5fc2" (UID: "59909240-fde2-4bdd-b0f4-d02985da5fc2"). InnerVolumeSpecName "kube-api-access-lsmdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.744382 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config" (OuterVolumeSpecName: "config") pod "59909240-fde2-4bdd-b0f4-d02985da5fc2" (UID: "59909240-fde2-4bdd-b0f4-d02985da5fc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.749940 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59909240-fde2-4bdd-b0f4-d02985da5fc2" (UID: "59909240-fde2-4bdd-b0f4-d02985da5fc2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.801597 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.801722 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsmdn\" (UniqueName: \"kubernetes.io/projected/59909240-fde2-4bdd-b0f4-d02985da5fc2-kube-api-access-lsmdn\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:42 crc kubenswrapper[4827]: I0126 09:23:42.801773 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59909240-fde2-4bdd-b0f4-d02985da5fc2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.301396 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" event={"ID":"59909240-fde2-4bdd-b0f4-d02985da5fc2","Type":"ContainerDied","Data":"3528ce749967c1019bfadcb271ea581258d4218019440d5bfee69066c3efe745"} Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.301455 4827 scope.go:117] "RemoveContainer" containerID="65551fbe776a4d1ec0bf5667e3e6ba6de5941a1a70a2d8c77f767d5cb4dadd1c" Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.301591 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sz2tk" Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.343510 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"] Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.351856 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sz2tk"] Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.712737 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" path="/var/lib/kubelet/pods/59909240-fde2-4bdd-b0f4-d02985da5fc2/volumes" Jan 26 09:23:43 crc kubenswrapper[4827]: I0126 09:23:43.920216 4827 scope.go:117] "RemoveContainer" containerID="f968e941c2a00a3378d28117a10f8cec2b552a89b9ba969d25a3633e4d503d2e" Jan 26 09:23:44 crc kubenswrapper[4827]: I0126 09:23:44.310492 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3f89d129-88aa-4c87-ac49-33e52bd1cd4c","Type":"ContainerStarted","Data":"1be5e73fbe251282f3feaf5f12bd1190a1f260ac51db548e5a5472fae3362e7a"} Jan 26 09:23:44 crc kubenswrapper[4827]: I0126 09:23:44.317763 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05543fb3-7874-4393-a8da-c3f6f7e65029","Type":"ContainerStarted","Data":"192ff5c8aef6a340f8ec3a7bd2ac54de09cb4ed39e95776068a4dca13bc78ef6"} Jan 26 09:23:44 crc kubenswrapper[4827]: I0126 09:23:44.318098 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 09:23:44 crc kubenswrapper[4827]: I0126 09:23:44.355917 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.198599881 podStartE2EDuration="22.355897597s" podCreationTimestamp="2026-01-26 09:23:22 +0000 UTC" firstStartedPulling="2026-01-26 09:23:33.763668363 +0000 UTC m=+1042.412340192" 
lastFinishedPulling="2026-01-26 09:23:43.920966099 +0000 UTC m=+1052.569637908" observedRunningTime="2026-01-26 09:23:44.350308006 +0000 UTC m=+1052.998979825" watchObservedRunningTime="2026-01-26 09:23:44.355897597 +0000 UTC m=+1053.004569416" Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.325348 4827 generic.go:334] "Generic (PLEG): container finished" podID="824497ea-421f-4928-83bd-908240595a4f" containerID="1d5065227acbc11ba55ff009202903e09fccc4f974f12818256f51b8c22f5dc5" exitCode=0 Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.325484 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6jn8j" event={"ID":"824497ea-421f-4928-83bd-908240595a4f","Type":"ContainerDied","Data":"1d5065227acbc11ba55ff009202903e09fccc4f974f12818256f51b8c22f5dc5"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.329426 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e","Type":"ContainerStarted","Data":"85669b0fa4a823c86f1b534e8455a0f150788ba68a89de418aaebc3dcc331c5f"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.331053 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm" event={"ID":"60184b1a-f656-4b71-bf13-2953f715bc12","Type":"ContainerStarted","Data":"53d422c3506e56257395ab97605fd70b41f4aebcac98abf678bfe4a2bf13efb7"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.331602 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sjsvm" Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.333169 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"da6ed528-8ee6-421d-a921-a9b6d1382d45","Type":"ContainerStarted","Data":"5d6e13a971f310e58a8a39112a8569c7e167031a61de6b1b3e26d7005a7e3b6a"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.333760 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/memcached-0" Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.335231 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b1cad67f-3855-4463-980d-5372c7185eef","Type":"ContainerStarted","Data":"c57d4c0a24b40099cc70585e547d509ae2a3e78f247c8fa5aa8b18d0102a5726"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.342810 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f55a507a-514c-48de-a8e8-8a3ef3eef284","Type":"ContainerStarted","Data":"f3af4f53cf403a53fb9e8a1c00e035f4d550894159a001984c3c88c2a482d957"} Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.387808 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.549969596 podStartE2EDuration="26.387774957s" podCreationTimestamp="2026-01-26 09:23:19 +0000 UTC" firstStartedPulling="2026-01-26 09:23:34.119212034 +0000 UTC m=+1042.767883853" lastFinishedPulling="2026-01-26 09:23:43.957017385 +0000 UTC m=+1052.605689214" observedRunningTime="2026-01-26 09:23:45.381153989 +0000 UTC m=+1054.029825818" watchObservedRunningTime="2026-01-26 09:23:45.387774957 +0000 UTC m=+1054.036446766" Jan 26 09:23:45 crc kubenswrapper[4827]: I0126 09:23:45.388207 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sjsvm" podStartSLOduration=11.560195475 podStartE2EDuration="21.38819828s" podCreationTimestamp="2026-01-26 09:23:24 +0000 UTC" firstStartedPulling="2026-01-26 09:23:34.118838734 +0000 UTC m=+1042.767510563" lastFinishedPulling="2026-01-26 09:23:43.946841549 +0000 UTC m=+1052.595513368" observedRunningTime="2026-01-26 09:23:45.366111802 +0000 UTC m=+1054.014783631" watchObservedRunningTime="2026-01-26 09:23:45.38819828 +0000 UTC m=+1054.036870099" Jan 26 09:23:48 crc kubenswrapper[4827]: I0126 09:23:48.367526 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-ovs-6jn8j" event={"ID":"824497ea-421f-4928-83bd-908240595a4f","Type":"ContainerStarted","Data":"62c9fad95c663ccbb2cc9fd3c3d51793d6c119615df1bccd74bab1a7552f9429"} Jan 26 09:23:48 crc kubenswrapper[4827]: I0126 09:23:48.368089 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6jn8j" event={"ID":"824497ea-421f-4928-83bd-908240595a4f","Type":"ContainerStarted","Data":"5cacae8378d4246fdab80b4edadf4c09d60320dfa8cb037f16d4f7a8cfe5349e"} Jan 26 09:23:48 crc kubenswrapper[4827]: I0126 09:23:48.391512 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6jn8j" podStartSLOduration=13.884517233 podStartE2EDuration="23.391495383s" podCreationTimestamp="2026-01-26 09:23:25 +0000 UTC" firstStartedPulling="2026-01-26 09:23:34.413965289 +0000 UTC m=+1043.062637108" lastFinishedPulling="2026-01-26 09:23:43.920943439 +0000 UTC m=+1052.569615258" observedRunningTime="2026-01-26 09:23:48.38772871 +0000 UTC m=+1057.036400539" watchObservedRunningTime="2026-01-26 09:23:48.391495383 +0000 UTC m=+1057.040167202" Jan 26 09:23:49 crc kubenswrapper[4827]: I0126 09:23:49.377303 4827 generic.go:334] "Generic (PLEG): container finished" podID="3f89d129-88aa-4c87-ac49-33e52bd1cd4c" containerID="1be5e73fbe251282f3feaf5f12bd1190a1f260ac51db548e5a5472fae3362e7a" exitCode=0 Jan 26 09:23:49 crc kubenswrapper[4827]: I0126 09:23:49.377377 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3f89d129-88aa-4c87-ac49-33e52bd1cd4c","Type":"ContainerDied","Data":"1be5e73fbe251282f3feaf5f12bd1190a1f260ac51db548e5a5472fae3362e7a"} Jan 26 09:23:49 crc kubenswrapper[4827]: I0126 09:23:49.379677 4827 generic.go:334] "Generic (PLEG): container finished" podID="b1cad67f-3855-4463-980d-5372c7185eef" containerID="c57d4c0a24b40099cc70585e547d509ae2a3e78f247c8fa5aa8b18d0102a5726" exitCode=0 Jan 26 09:23:49 crc 
kubenswrapper[4827]: I0126 09:23:49.379719 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b1cad67f-3855-4463-980d-5372c7185eef","Type":"ContainerDied","Data":"c57d4c0a24b40099cc70585e547d509ae2a3e78f247c8fa5aa8b18d0102a5726"} Jan 26 09:23:49 crc kubenswrapper[4827]: I0126 09:23:49.380205 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:49 crc kubenswrapper[4827]: I0126 09:23:49.380268 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.395420 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3f89d129-88aa-4c87-ac49-33e52bd1cd4c","Type":"ContainerStarted","Data":"c88007d1a123d94afe6fbb5e26acdd27a2c6c66c740a4630489847849d6345c9"} Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.400923 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b1cad67f-3855-4463-980d-5372c7185eef","Type":"ContainerStarted","Data":"786fda0d1f652250550a6d1439a72386d785350cc30fdb9b4d6dbe732775061a"} Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.405876 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f55a507a-514c-48de-a8e8-8a3ef3eef284","Type":"ContainerStarted","Data":"61bad285a7e41fdb473ca80aa8d0d381d90a870adf81ec9c862e02e619ec5249"} Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.407534 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.413576 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e","Type":"ContainerStarted","Data":"b4d2f5cb210cd8e10c1fd5566d4e90379edfd47bd6c0d61fa69ab7b683926a6b"} Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.443614 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.397249905 podStartE2EDuration="33.443593168s" podCreationTimestamp="2026-01-26 09:23:17 +0000 UTC" firstStartedPulling="2026-01-26 09:23:33.900530147 +0000 UTC m=+1042.549201966" lastFinishedPulling="2026-01-26 09:23:43.94687341 +0000 UTC m=+1052.595545229" observedRunningTime="2026-01-26 09:23:50.43699346 +0000 UTC m=+1059.085665319" watchObservedRunningTime="2026-01-26 09:23:50.443593168 +0000 UTC m=+1059.092264997" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.483506 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.75973595 podStartE2EDuration="26.483480067s" podCreationTimestamp="2026-01-26 09:23:24 +0000 UTC" firstStartedPulling="2026-01-26 09:23:35.803707412 +0000 UTC m=+1044.452379231" lastFinishedPulling="2026-01-26 09:23:49.527451509 +0000 UTC m=+1058.176123348" observedRunningTime="2026-01-26 09:23:50.473493997 +0000 UTC m=+1059.122165846" watchObservedRunningTime="2026-01-26 09:23:50.483480067 +0000 UTC m=+1059.132151926" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.510551 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.346057306 podStartE2EDuration="22.510524849s" podCreationTimestamp="2026-01-26 09:23:28 +0000 UTC" firstStartedPulling="2026-01-26 09:23:35.364064366 +0000 UTC m=+1044.012736185" lastFinishedPulling="2026-01-26 09:23:49.528531899 +0000 UTC m=+1058.177203728" observedRunningTime="2026-01-26 09:23:50.499147041 +0000 UTC m=+1059.147818870" watchObservedRunningTime="2026-01-26 09:23:50.510524849 +0000 UTC m=+1059.159196688" 
Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.554607 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.903232278 podStartE2EDuration="32.554586071s" podCreationTimestamp="2026-01-26 09:23:18 +0000 UTC" firstStartedPulling="2026-01-26 09:23:33.318489488 +0000 UTC m=+1041.967161307" lastFinishedPulling="2026-01-26 09:23:43.969843281 +0000 UTC m=+1052.618515100" observedRunningTime="2026-01-26 09:23:50.54716656 +0000 UTC m=+1059.195838389" watchObservedRunningTime="2026-01-26 09:23:50.554586071 +0000 UTC m=+1059.203257900" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.747598 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:50 crc kubenswrapper[4827]: I0126 09:23:50.781897 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.175482 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.420487 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.463074 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.761106 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:51 crc kubenswrapper[4827]: E0126 09:23:51.761754 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerName="init" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.761772 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" 
containerName="init" Jan 26 09:23:51 crc kubenswrapper[4827]: E0126 09:23:51.761796 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerName="dnsmasq-dns" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.761802 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerName="dnsmasq-dns" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.761963 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="59909240-fde2-4bdd-b0f4-d02985da5fc2" containerName="dnsmasq-dns" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.762866 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.768668 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.775931 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.865760 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.865921 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.866157 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skrh\" (UniqueName: \"kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.866335 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.881191 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9dxlf"] Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.882090 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.885736 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.934080 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9dxlf"] Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969244 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovs-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skrh\" (UniqueName: \"kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc 
kubenswrapper[4827]: I0126 09:23:51.969417 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovn-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969459 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff255bf-dcef-4418-ac66-802299400786-config\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969477 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969501 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-combined-ca-bundle\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.969523 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 
crc kubenswrapper[4827]: I0126 09:23:51.969544 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rn2w\" (UniqueName: \"kubernetes.io/projected/0ff255bf-dcef-4418-ac66-802299400786-kube-api-access-6rn2w\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.970410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.970767 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.971444 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:51 crc kubenswrapper[4827]: I0126 09:23:51.988906 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4skrh\" (UniqueName: \"kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh\") pod \"dnsmasq-dns-794868bd45-7gpkw\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.070976 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovn-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071056 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff255bf-dcef-4418-ac66-802299400786-config\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071097 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-combined-ca-bundle\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071122 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071146 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rn2w\" (UniqueName: \"kubernetes.io/projected/0ff255bf-dcef-4418-ac66-802299400786-kube-api-access-6rn2w\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071196 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovs-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071534 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovs-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.071593 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ff255bf-dcef-4418-ac66-802299400786-ovn-rundir\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.072242 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff255bf-dcef-4418-ac66-802299400786-config\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.076575 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-combined-ca-bundle\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.078969 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 
09:23:52.079470 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.087186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff255bf-dcef-4418-ac66-802299400786-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.105239 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rn2w\" (UniqueName: \"kubernetes.io/projected/0ff255bf-dcef-4418-ac66-802299400786-kube-api-access-6rn2w\") pod \"ovn-controller-metrics-9dxlf\" (UID: \"0ff255bf-dcef-4418-ac66-802299400786\") " pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.133591 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"] Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.134872 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.146832 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.151820 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"] Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.198805 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9dxlf" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.277341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vbq\" (UniqueName: \"kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.277397 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.277415 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.277451 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.277600 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" 
(UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.373052 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.382565 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.382696 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vbq\" (UniqueName: \"kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.382741 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.382759 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.382783 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.383549 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.384121 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.384407 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.388422 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.418378 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vbq\" (UniqueName: \"kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq\") pod \"dnsmasq-dns-757dc6fff9-bp7wz\" 
(UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") " pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.508564 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.664394 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.731822 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"] Jan 26 09:23:52 crc kubenswrapper[4827]: I0126 09:23:52.740958 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9dxlf"] Jan 26 09:23:52 crc kubenswrapper[4827]: W0126 09:23:52.742709 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff255bf_dcef_4418_ac66_802299400786.slice/crio-62262e7cbeb59939d499804ac7a364f8154823e074a86e8b980dcbea09be4910 WatchSource:0}: Error finding container 62262e7cbeb59939d499804ac7a364f8154823e074a86e8b980dcbea09be4910: Status 404 returned error can't find the container with id 62262e7cbeb59939d499804ac7a364f8154823e074a86e8b980dcbea09be4910 Jan 26 09:23:52 crc kubenswrapper[4827]: W0126 09:23:52.747492 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9cc942_f402_4e73_b974_c61b05650876.slice/crio-82526a99f51e1520ec28a7847b469f413164dc963faaa66c493d33de1f16f7a3 WatchSource:0}: Error finding container 82526a99f51e1520ec28a7847b469f413164dc963faaa66c493d33de1f16f7a3: Status 404 returned error can't find the container with id 82526a99f51e1520ec28a7847b469f413164dc963faaa66c493d33de1f16f7a3 Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.175456 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.210099 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.442370 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9dxlf" event={"ID":"0ff255bf-dcef-4418-ac66-802299400786","Type":"ContainerStarted","Data":"38e6d44aef246165c97caf5772798e66f13b624edb41a83323497356cca530bb"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.442419 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9dxlf" event={"ID":"0ff255bf-dcef-4418-ac66-802299400786","Type":"ContainerStarted","Data":"62262e7cbeb59939d499804ac7a364f8154823e074a86e8b980dcbea09be4910"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.444007 4827 generic.go:334] "Generic (PLEG): container finished" podID="4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" containerID="96c33ecc6e6f743160a7e72bbb81c38ceafb8a4d92b445e5c7d639db74c6e2f7" exitCode=0 Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.444102 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" event={"ID":"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98","Type":"ContainerDied","Data":"96c33ecc6e6f743160a7e72bbb81c38ceafb8a4d92b445e5c7d639db74c6e2f7"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.444189 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" event={"ID":"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98","Type":"ContainerStarted","Data":"3be495594b19e546f76fd723717947db342a55c9597bbe82246cf238a114847f"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.447690 4827 generic.go:334] "Generic (PLEG): container finished" podID="5f9cc942-f402-4e73-b974-c61b05650876" containerID="60cdb241ec594632ef112239e486eb652c0ad1f9bb7ba6b168fb2c96357c3cd3" exitCode=0 Jan 26 09:23:53 crc 
kubenswrapper[4827]: I0126 09:23:53.448936 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" event={"ID":"5f9cc942-f402-4e73-b974-c61b05650876","Type":"ContainerDied","Data":"60cdb241ec594632ef112239e486eb652c0ad1f9bb7ba6b168fb2c96357c3cd3"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.449004 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" event={"ID":"5f9cc942-f402-4e73-b974-c61b05650876","Type":"ContainerStarted","Data":"82526a99f51e1520ec28a7847b469f413164dc963faaa66c493d33de1f16f7a3"} Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.500913 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9dxlf" podStartSLOduration=2.500889893 podStartE2EDuration="2.500889893s" podCreationTimestamp="2026-01-26 09:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:23:53.46901374 +0000 UTC m=+1062.117685579" watchObservedRunningTime="2026-01-26 09:23:53.500889893 +0000 UTC m=+1062.149561722" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.584160 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.832414 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.838239 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.845001 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.845169 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.845292 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.845431 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kphk5" Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.846057 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 09:23:53 crc kubenswrapper[4827]: I0126 09:23:53.991973 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019078 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc\") pod \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019230 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config\") pod \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019295 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skrh\" (UniqueName: 
\"kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh\") pod \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019319 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb\") pod \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\" (UID: \"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98\") " Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019529 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-config\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019571 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019612 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019664 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-scripts\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019687 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmh7t\" (UniqueName: \"kubernetes.io/projected/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-kube-api-access-lmh7t\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.019711 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.055467 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh" (OuterVolumeSpecName: "kube-api-access-4skrh") pod "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" (UID: "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98"). InnerVolumeSpecName "kube-api-access-4skrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.112172 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config" (OuterVolumeSpecName: "config") pod "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" (UID: "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.123795 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.123891 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-config\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.123915 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.123935 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.123977 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.124008 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-scripts\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.124037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmh7t\" (UniqueName: \"kubernetes.io/projected/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-kube-api-access-lmh7t\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.124098 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.124113 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skrh\" (UniqueName: \"kubernetes.io/projected/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-kube-api-access-4skrh\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.126406 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" (UID: "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.127045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.127778 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-scripts\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.128358 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-config\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.140328 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.154220 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.157313 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" (UID: "4ec490d3-01a4-4f0a-a6fb-368e0fcefa98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.167528 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.174146 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmh7t\" (UniqueName: \"kubernetes.io/projected/b4dd74b4-df0b-414c-ba61-5d428eb2f33e-kube-api-access-lmh7t\") pod \"ovn-northd-0\" (UID: \"b4dd74b4-df0b-414c-ba61-5d428eb2f33e\") " pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.183381 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.225608 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.225660 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.459081 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" event={"ID":"4ec490d3-01a4-4f0a-a6fb-368e0fcefa98","Type":"ContainerDied","Data":"3be495594b19e546f76fd723717947db342a55c9597bbe82246cf238a114847f"} Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.459145 4827 scope.go:117] "RemoveContainer" containerID="96c33ecc6e6f743160a7e72bbb81c38ceafb8a4d92b445e5c7d639db74c6e2f7" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.459319 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-7gpkw" Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.507631 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.515318 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-7gpkw"] Jan 26 09:23:54 crc kubenswrapper[4827]: I0126 09:23:54.668620 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 09:23:55 crc kubenswrapper[4827]: I0126 09:23:55.468816 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b4dd74b4-df0b-414c-ba61-5d428eb2f33e","Type":"ContainerStarted","Data":"6c8ff4a831e430afd141a620c4b5b1234356a62d5d7d587516be0d8355a6904b"} Jan 26 09:23:55 crc kubenswrapper[4827]: I0126 09:23:55.711587 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" path="/var/lib/kubelet/pods/4ec490d3-01a4-4f0a-a6fb-368e0fcefa98/volumes" Jan 26 09:23:56 crc kubenswrapper[4827]: I0126 09:23:56.482536 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" event={"ID":"5f9cc942-f402-4e73-b974-c61b05650876","Type":"ContainerStarted","Data":"a94be0fa5f974286a7c44481a06d9a2e3ea587f1e40c5605d4f4341db4aea0a1"} Jan 26 09:23:56 crc kubenswrapper[4827]: I0126 09:23:56.482880 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:23:56 crc kubenswrapper[4827]: I0126 09:23:56.501328 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podStartSLOduration=4.501307408 podStartE2EDuration="4.501307408s" podCreationTimestamp="2026-01-26 09:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 09:23:56.498448621 +0000 UTC m=+1065.147120450" watchObservedRunningTime="2026-01-26 09:23:56.501307408 +0000 UTC m=+1065.149979237" Jan 26 09:23:57 crc kubenswrapper[4827]: I0126 09:23:57.494926 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b4dd74b4-df0b-414c-ba61-5d428eb2f33e","Type":"ContainerStarted","Data":"56e0cda29312dbb733cac0094da3cc0f6e553055b3f2547580ece34f297433cc"} Jan 26 09:23:57 crc kubenswrapper[4827]: I0126 09:23:57.496916 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b4dd74b4-df0b-414c-ba61-5d428eb2f33e","Type":"ContainerStarted","Data":"915c561ed8180062585647dc6015004017e9042d7f406096a1569a9052e268c7"} Jan 26 09:23:57 crc kubenswrapper[4827]: I0126 09:23:57.531856 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.33615771 podStartE2EDuration="4.531827921s" podCreationTimestamp="2026-01-26 09:23:53 +0000 UTC" firstStartedPulling="2026-01-26 09:23:54.691162318 +0000 UTC m=+1063.339834147" lastFinishedPulling="2026-01-26 09:23:56.886832529 +0000 UTC m=+1065.535504358" observedRunningTime="2026-01-26 09:23:57.523526817 +0000 UTC m=+1066.172198666" watchObservedRunningTime="2026-01-26 09:23:57.531827921 +0000 UTC m=+1066.180499780" Jan 26 09:23:58 crc kubenswrapper[4827]: I0126 09:23:58.499760 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 09:23:58 crc kubenswrapper[4827]: I0126 09:23:58.682513 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 09:23:58 crc kubenswrapper[4827]: I0126 09:23:58.682679 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 09:23:58 crc kubenswrapper[4827]: I0126 09:23:58.763594 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-galera-0" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.583091 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.916547 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-62d5-account-create-update-q6kbw"] Jan 26 09:23:59 crc kubenswrapper[4827]: E0126 09:23:59.917275 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" containerName="init" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.917299 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" containerName="init" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.917513 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec490d3-01a4-4f0a-a6fb-368e0fcefa98" containerName="init" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.918766 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.920926 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.932785 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-62d5-account-create-update-q6kbw"] Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.981117 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lwvw2"] Jan 26 09:23:59 crc kubenswrapper[4827]: I0126 09:23:59.982335 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.006880 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lwvw2"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.029560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts\") pod \"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.029628 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zbp\" (UniqueName: \"kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp\") pod \"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.068214 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.068282 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.131760 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.131820 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts\") pod \"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.131844 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jm9w\" (UniqueName: \"kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.131997 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zbp\" (UniqueName: \"kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp\") pod \"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.132610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts\") pod \"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.141923 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.151913 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zbp\" (UniqueName: \"kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp\") pod 
\"keystone-62d5-account-create-update-q6kbw\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.224366 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ckrtf"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.225776 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.232947 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ckrtf"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.233263 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.233974 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jm9w\" (UniqueName: \"kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.234197 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.238749 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.267903 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jm9w\" (UniqueName: \"kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w\") pod \"keystone-db-create-lwvw2\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.304869 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.336366 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-be69-account-create-update-vh7s6"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.337199 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2q7\" (UniqueName: \"kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.337454 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.340714 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.344937 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.352943 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-be69-account-create-update-vh7s6"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.604203 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.604303 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts\") pod \"placement-be69-account-create-update-vh7s6\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.604474 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2q7\" (UniqueName: \"kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.604505 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvn4\" (UniqueName: \"kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4\") pod \"placement-be69-account-create-update-vh7s6\" (UID: 
\"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.608384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.647012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2q7\" (UniqueName: \"kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7\") pod \"placement-db-create-ckrtf\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.709264 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts\") pod \"placement-be69-account-create-update-vh7s6\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.709430 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvn4\" (UniqueName: \"kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4\") pod \"placement-be69-account-create-update-vh7s6\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.710395 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts\") pod 
\"placement-be69-account-create-update-vh7s6\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.731342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvn4\" (UniqueName: \"kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4\") pod \"placement-be69-account-create-update-vh7s6\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.735233 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5s5pt"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.737254 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.750485 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5s5pt"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.815733 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-430e-account-create-update-c2ksl"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.816762 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.825028 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.832745 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.840845 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.857085 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-430e-account-create-update-c2ksl"] Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.913782 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg58\" (UniqueName: \"kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.914114 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxln7\" (UniqueName: \"kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7\") pod \"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.914284 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.914385 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts\") pod \"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:00 crc kubenswrapper[4827]: I0126 09:24:00.948045 4827 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.017981 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg58\" (UniqueName: \"kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.018066 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxln7\" (UniqueName: \"kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7\") pod \"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.018206 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.018233 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts\") pod \"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.019027 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts\") pod 
\"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.019981 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.040459 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxln7\" (UniqueName: \"kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7\") pod \"glance-db-create-5s5pt\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.052271 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg58\" (UniqueName: \"kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58\") pod \"glance-430e-account-create-update-c2ksl\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.061585 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.110892 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lwvw2"] Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.132277 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.146389 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-62d5-account-create-update-q6kbw"] Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.361261 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ckrtf"] Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.503228 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-be69-account-create-update-vh7s6"] Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.684845 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be69-account-create-update-vh7s6" event={"ID":"e5c91b0d-9e51-4065-9eee-fde2e4971bf4","Type":"ContainerStarted","Data":"4dc79ae3d61bbf93149ae8b802b27766cea7f4fc3a8aa085c67f5936b48c02f3"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.693435 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ckrtf" event={"ID":"8b379f41-b9fc-49f0-a30e-fb5611d5f043","Type":"ContainerStarted","Data":"7966158be522860f67ba34272cf848ac0cd9680c42b2051129c6b6e1d0311cde"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.693483 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ckrtf" event={"ID":"8b379f41-b9fc-49f0-a30e-fb5611d5f043","Type":"ContainerStarted","Data":"d1c60c5ffb6c2c89b7ad49f357b2eac1968af9a96c0fed3bab69dfc474b0de95"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.701401 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-62d5-account-create-update-q6kbw" event={"ID":"4801aa52-328a-4300-bc01-6c9a5455395e","Type":"ContainerStarted","Data":"ee3aac523df6de531a19ed702d0956ba0a5ef3dab55508edbef4630686f21937"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.701441 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-62d5-account-create-update-q6kbw" event={"ID":"4801aa52-328a-4300-bc01-6c9a5455395e","Type":"ContainerStarted","Data":"594faa2426d423bd6a3f150c3f6c43df374d537fe543ea5afe3c4eb64fddd7c7"} Jan 26 09:24:01 crc kubenswrapper[4827]: W0126 09:24:01.716329 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdad4441_f327_4a21_8c0a_7f86ac1df4b4.slice/crio-f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963 WatchSource:0}: Error finding container f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963: Status 404 returned error can't find the container with id f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963 Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.716567 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lwvw2" event={"ID":"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e","Type":"ContainerStarted","Data":"c0db02b1f78d171d8cf05abaf9b3f123b22aefc2d355f80354be8960c48298be"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.716596 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lwvw2" event={"ID":"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e","Type":"ContainerStarted","Data":"ac5135609487db23b22c1470a9c76ecd64499acd671cf2028f5bfdec5f83602b"} Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.720893 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5s5pt"] Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.732279 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-ckrtf" podStartSLOduration=1.732242536 podStartE2EDuration="1.732242536s" podCreationTimestamp="2026-01-26 09:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:01.710332143 +0000 UTC 
m=+1070.359003962" watchObservedRunningTime="2026-01-26 09:24:01.732242536 +0000 UTC m=+1070.380914355" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.746459 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-62d5-account-create-update-q6kbw" podStartSLOduration=2.74643785 podStartE2EDuration="2.74643785s" podCreationTimestamp="2026-01-26 09:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:01.724524607 +0000 UTC m=+1070.373196436" watchObservedRunningTime="2026-01-26 09:24:01.74643785 +0000 UTC m=+1070.395109669" Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.764900 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-lwvw2" podStartSLOduration=2.764878919 podStartE2EDuration="2.764878919s" podCreationTimestamp="2026-01-26 09:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:01.741176348 +0000 UTC m=+1070.389848167" watchObservedRunningTime="2026-01-26 09:24:01.764878919 +0000 UTC m=+1070.413550738" Jan 26 09:24:01 crc kubenswrapper[4827]: W0126 09:24:01.783309 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce90226f_115c_42f6_bf51_1451a31d647c.slice/crio-96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9 WatchSource:0}: Error finding container 96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9: Status 404 returned error can't find the container with id 96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9 Jan 26 09:24:01 crc kubenswrapper[4827]: I0126 09:24:01.801071 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-430e-account-create-update-c2ksl"] Jan 26 09:24:02 crc 
kubenswrapper[4827]: I0126 09:24:02.509913 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.605479 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"] Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.605753 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="dnsmasq-dns" containerID="cri-o://a369dc7bd24b4a9bcfbff13b419847ec275d36d21179748465e7de99a7371919" gracePeriod=10 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.737809 4827 generic.go:334] "Generic (PLEG): container finished" podID="8b379f41-b9fc-49f0-a30e-fb5611d5f043" containerID="7966158be522860f67ba34272cf848ac0cd9680c42b2051129c6b6e1d0311cde" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.737870 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ckrtf" event={"ID":"8b379f41-b9fc-49f0-a30e-fb5611d5f043","Type":"ContainerDied","Data":"7966158be522860f67ba34272cf848ac0cd9680c42b2051129c6b6e1d0311cde"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.744545 4827 generic.go:334] "Generic (PLEG): container finished" podID="4801aa52-328a-4300-bc01-6c9a5455395e" containerID="ee3aac523df6de531a19ed702d0956ba0a5ef3dab55508edbef4630686f21937" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.744610 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-62d5-account-create-update-q6kbw" event={"ID":"4801aa52-328a-4300-bc01-6c9a5455395e","Type":"ContainerDied","Data":"ee3aac523df6de531a19ed702d0956ba0a5ef3dab55508edbef4630686f21937"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.746332 4827 generic.go:334] "Generic (PLEG): container finished" podID="cdad4441-f327-4a21-8c0a-7f86ac1df4b4" 
containerID="6cd014ea888aa55f66db481e5e74a96c2266b93ec93345ba81e0a5026b8298d1" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.746447 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5s5pt" event={"ID":"cdad4441-f327-4a21-8c0a-7f86ac1df4b4","Type":"ContainerDied","Data":"6cd014ea888aa55f66db481e5e74a96c2266b93ec93345ba81e0a5026b8298d1"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.746518 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5s5pt" event={"ID":"cdad4441-f327-4a21-8c0a-7f86ac1df4b4","Type":"ContainerStarted","Data":"f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.747682 4827 generic.go:334] "Generic (PLEG): container finished" podID="ce90226f-115c-42f6-bf51-1451a31d647c" containerID="877cdcd7c56b635a01a233b79b110bb95509a7f12ad63fc5139afd8a76dadba1" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.747770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-430e-account-create-update-c2ksl" event={"ID":"ce90226f-115c-42f6-bf51-1451a31d647c","Type":"ContainerDied","Data":"877cdcd7c56b635a01a233b79b110bb95509a7f12ad63fc5139afd8a76dadba1"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.747794 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-430e-account-create-update-c2ksl" event={"ID":"ce90226f-115c-42f6-bf51-1451a31d647c","Type":"ContainerStarted","Data":"96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.750142 4827 generic.go:334] "Generic (PLEG): container finished" podID="83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" containerID="c0db02b1f78d171d8cf05abaf9b3f123b22aefc2d355f80354be8960c48298be" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.750259 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-lwvw2" event={"ID":"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e","Type":"ContainerDied","Data":"c0db02b1f78d171d8cf05abaf9b3f123b22aefc2d355f80354be8960c48298be"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.751889 4827 generic.go:334] "Generic (PLEG): container finished" podID="c785dedf-5109-4e22-be36-b04a971c38e0" containerID="a369dc7bd24b4a9bcfbff13b419847ec275d36d21179748465e7de99a7371919" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.751929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" event={"ID":"c785dedf-5109-4e22-be36-b04a971c38e0","Type":"ContainerDied","Data":"a369dc7bd24b4a9bcfbff13b419847ec275d36d21179748465e7de99a7371919"} Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.752909 4827 generic.go:334] "Generic (PLEG): container finished" podID="e5c91b0d-9e51-4065-9eee-fde2e4971bf4" containerID="6f13e964074e4583f4cb88b0008a4a0282d2850f2182ac1fd10db6b9c3d9024f" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4827]: I0126 09:24:02.752934 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be69-account-create-update-vh7s6" event={"ID":"e5c91b0d-9e51-4065-9eee-fde2e4971bf4","Type":"ContainerDied","Data":"6f13e964074e4583f4cb88b0008a4a0282d2850f2182ac1fd10db6b9c3d9024f"} Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.109389 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.273549 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmg7r\" (UniqueName: \"kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r\") pod \"c785dedf-5109-4e22-be36-b04a971c38e0\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.273704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config\") pod \"c785dedf-5109-4e22-be36-b04a971c38e0\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.273744 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc\") pod \"c785dedf-5109-4e22-be36-b04a971c38e0\" (UID: \"c785dedf-5109-4e22-be36-b04a971c38e0\") " Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.279031 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r" (OuterVolumeSpecName: "kube-api-access-vmg7r") pod "c785dedf-5109-4e22-be36-b04a971c38e0" (UID: "c785dedf-5109-4e22-be36-b04a971c38e0"). InnerVolumeSpecName "kube-api-access-vmg7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.312325 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config" (OuterVolumeSpecName: "config") pod "c785dedf-5109-4e22-be36-b04a971c38e0" (UID: "c785dedf-5109-4e22-be36-b04a971c38e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.318870 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c785dedf-5109-4e22-be36-b04a971c38e0" (UID: "c785dedf-5109-4e22-be36-b04a971c38e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.375963 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmg7r\" (UniqueName: \"kubernetes.io/projected/c785dedf-5109-4e22-be36-b04a971c38e0-kube-api-access-vmg7r\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.376201 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.376291 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c785dedf-5109-4e22-be36-b04a971c38e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.766989 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" event={"ID":"c785dedf-5109-4e22-be36-b04a971c38e0","Type":"ContainerDied","Data":"76d07c026b576b229a193a395c407346fc7f4c564a67d086d4cafa857b67e090"} Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.769943 4827 scope.go:117] "RemoveContainer" containerID="a369dc7bd24b4a9bcfbff13b419847ec275d36d21179748465e7de99a7371919" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.767180 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vjjcm" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.811104 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"] Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.818671 4827 scope.go:117] "RemoveContainer" containerID="bf50f9d76288974488855c1fb16667becb32f72d49f81db12ad465527748b020" Jan 26 09:24:03 crc kubenswrapper[4827]: I0126 09:24:03.824042 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vjjcm"] Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.182416 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lwvw2" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.303887 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jm9w\" (UniqueName: \"kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w\") pod \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.304001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts\") pod \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\" (UID: \"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.305308 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" (UID: "83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.323737 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w" (OuterVolumeSpecName: "kube-api-access-8jm9w") pod "83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" (UID: "83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e"). InnerVolumeSpecName "kube-api-access-8jm9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.387887 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-430e-account-create-update-c2ksl" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.408870 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.408927 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jm9w\" (UniqueName: \"kubernetes.io/projected/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e-kube-api-access-8jm9w\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.459180 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-be69-account-create-update-vh7s6" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.467049 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5s5pt" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.478125 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.495730 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.511232 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghg58\" (UniqueName: \"kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58\") pod \"ce90226f-115c-42f6-bf51-1451a31d647c\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.511899 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts\") pod \"ce90226f-115c-42f6-bf51-1451a31d647c\" (UID: \"ce90226f-115c-42f6-bf51-1451a31d647c\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.512528 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce90226f-115c-42f6-bf51-1451a31d647c" (UID: "ce90226f-115c-42f6-bf51-1451a31d647c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.518008 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58" (OuterVolumeSpecName: "kube-api-access-ghg58") pod "ce90226f-115c-42f6-bf51-1451a31d647c" (UID: "ce90226f-115c-42f6-bf51-1451a31d647c"). InnerVolumeSpecName "kube-api-access-ghg58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.613204 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts\") pod \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.613250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg2q7\" (UniqueName: \"kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7\") pod \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\" (UID: \"8b379f41-b9fc-49f0-a30e-fb5611d5f043\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.613714 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b379f41-b9fc-49f0-a30e-fb5611d5f043" (UID: "8b379f41-b9fc-49f0-a30e-fb5611d5f043"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.613962 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts\") pod \"4801aa52-328a-4300-bc01-6c9a5455395e\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.614366 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4801aa52-328a-4300-bc01-6c9a5455395e" (UID: "4801aa52-328a-4300-bc01-6c9a5455395e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.614418 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts\") pod \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.614437 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnvn4\" (UniqueName: \"kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4\") pod \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.614811 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdad4441-f327-4a21-8c0a-7f86ac1df4b4" (UID: "cdad4441-f327-4a21-8c0a-7f86ac1df4b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.614869 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts\") pod \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\" (UID: \"e5c91b0d-9e51-4065-9eee-fde2e4971bf4\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.615246 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c91b0d-9e51-4065-9eee-fde2e4971bf4" (UID: "e5c91b0d-9e51-4065-9eee-fde2e4971bf4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.615329 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54zbp\" (UniqueName: \"kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp\") pod \"4801aa52-328a-4300-bc01-6c9a5455395e\" (UID: \"4801aa52-328a-4300-bc01-6c9a5455395e\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.615872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxln7\" (UniqueName: \"kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7\") pod \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\" (UID: \"cdad4441-f327-4a21-8c0a-7f86ac1df4b4\") " Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7" (OuterVolumeSpecName: "kube-api-access-gg2q7") pod "8b379f41-b9fc-49f0-a30e-fb5611d5f043" (UID: "8b379f41-b9fc-49f0-a30e-fb5611d5f043"). InnerVolumeSpecName "kube-api-access-gg2q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616560 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce90226f-115c-42f6-bf51-1451a31d647c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616572 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b379f41-b9fc-49f0-a30e-fb5611d5f043-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616581 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg2q7\" (UniqueName: \"kubernetes.io/projected/8b379f41-b9fc-49f0-a30e-fb5611d5f043-kube-api-access-gg2q7\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616591 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghg58\" (UniqueName: \"kubernetes.io/projected/ce90226f-115c-42f6-bf51-1451a31d647c-kube-api-access-ghg58\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616599 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4801aa52-328a-4300-bc01-6c9a5455395e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616669 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.616680 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc 
kubenswrapper[4827]: I0126 09:24:04.617479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4" (OuterVolumeSpecName: "kube-api-access-wnvn4") pod "e5c91b0d-9e51-4065-9eee-fde2e4971bf4" (UID: "e5c91b0d-9e51-4065-9eee-fde2e4971bf4"). InnerVolumeSpecName "kube-api-access-wnvn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.617853 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp" (OuterVolumeSpecName: "kube-api-access-54zbp") pod "4801aa52-328a-4300-bc01-6c9a5455395e" (UID: "4801aa52-328a-4300-bc01-6c9a5455395e"). InnerVolumeSpecName "kube-api-access-54zbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.618885 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7" (OuterVolumeSpecName: "kube-api-access-fxln7") pod "cdad4441-f327-4a21-8c0a-7f86ac1df4b4" (UID: "cdad4441-f327-4a21-8c0a-7f86ac1df4b4"). InnerVolumeSpecName "kube-api-access-fxln7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.718165 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnvn4\" (UniqueName: \"kubernetes.io/projected/e5c91b0d-9e51-4065-9eee-fde2e4971bf4-kube-api-access-wnvn4\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.718533 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54zbp\" (UniqueName: \"kubernetes.io/projected/4801aa52-328a-4300-bc01-6c9a5455395e-kube-api-access-54zbp\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.718548 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxln7\" (UniqueName: \"kubernetes.io/projected/cdad4441-f327-4a21-8c0a-7f86ac1df4b4-kube-api-access-fxln7\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.776915 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ckrtf" event={"ID":"8b379f41-b9fc-49f0-a30e-fb5611d5f043","Type":"ContainerDied","Data":"d1c60c5ffb6c2c89b7ad49f357b2eac1968af9a96c0fed3bab69dfc474b0de95"} Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.776947 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ckrtf" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.776962 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c60c5ffb6c2c89b7ad49f357b2eac1968af9a96c0fed3bab69dfc474b0de95" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.778100 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-62d5-account-create-update-q6kbw" event={"ID":"4801aa52-328a-4300-bc01-6c9a5455395e","Type":"ContainerDied","Data":"594faa2426d423bd6a3f150c3f6c43df374d537fe543ea5afe3c4eb64fddd7c7"} Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.778125 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594faa2426d423bd6a3f150c3f6c43df374d537fe543ea5afe3c4eb64fddd7c7" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.778186 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-62d5-account-create-update-q6kbw" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.779935 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be69-account-create-update-vh7s6" event={"ID":"e5c91b0d-9e51-4065-9eee-fde2e4971bf4","Type":"ContainerDied","Data":"4dc79ae3d61bbf93149ae8b802b27766cea7f4fc3a8aa085c67f5936b48c02f3"} Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.779962 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc79ae3d61bbf93149ae8b802b27766cea7f4fc3a8aa085c67f5936b48c02f3" Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.780008 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-be69-account-create-update-vh7s6"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.786212 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5s5pt" event={"ID":"cdad4441-f327-4a21-8c0a-7f86ac1df4b4","Type":"ContainerDied","Data":"f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963"}
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.786255 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0febc40dca52420ecc060ed27d71e7c708d16d31374e3398d103f8b0b7e2963"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.786310 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5s5pt"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.795103 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-430e-account-create-update-c2ksl"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.795122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-430e-account-create-update-c2ksl" event={"ID":"ce90226f-115c-42f6-bf51-1451a31d647c","Type":"ContainerDied","Data":"96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9"}
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.795904 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96aa60f3609e03ec126161b0955dc6eb4f7cf5fa2b89abc9f8cc9d7bc3cbdac9"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.805188 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lwvw2" event={"ID":"83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e","Type":"ContainerDied","Data":"ac5135609487db23b22c1470a9c76ecd64499acd671cf2028f5bfdec5f83602b"}
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.805227 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lwvw2"
Jan 26 09:24:04 crc kubenswrapper[4827]: I0126 09:24:04.805231 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5135609487db23b22c1470a9c76ecd64499acd671cf2028f5bfdec5f83602b"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.714675 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" path="/var/lib/kubelet/pods/c785dedf-5109-4e22-be36-b04a971c38e0/volumes"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947129 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jhkcw"]
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947490 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947507 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947539 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce90226f-115c-42f6-bf51-1451a31d647c" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947547 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce90226f-115c-42f6-bf51-1451a31d647c" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947564 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4801aa52-328a-4300-bc01-6c9a5455395e" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947572 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4801aa52-328a-4300-bc01-6c9a5455395e" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947589 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdad4441-f327-4a21-8c0a-7f86ac1df4b4" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947596 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdad4441-f327-4a21-8c0a-7f86ac1df4b4" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947605 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="init"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947612 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="init"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947620 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="dnsmasq-dns"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947627 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="dnsmasq-dns"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947662 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c91b0d-9e51-4065-9eee-fde2e4971bf4" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947671 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c91b0d-9e51-4065-9eee-fde2e4971bf4" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: E0126 09:24:05.947683 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b379f41-b9fc-49f0-a30e-fb5611d5f043" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947692 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b379f41-b9fc-49f0-a30e-fb5611d5f043" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947873 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c91b0d-9e51-4065-9eee-fde2e4971bf4" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947894 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdad4441-f327-4a21-8c0a-7f86ac1df4b4" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947906 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c785dedf-5109-4e22-be36-b04a971c38e0" containerName="dnsmasq-dns"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947916 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce90226f-115c-42f6-bf51-1451a31d647c" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947932 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947942 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4801aa52-328a-4300-bc01-6c9a5455395e" containerName="mariadb-account-create-update"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.947952 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b379f41-b9fc-49f0-a30e-fb5611d5f043" containerName="mariadb-database-create"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.948539 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.950908 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.951677 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ksktd"
Jan 26 09:24:05 crc kubenswrapper[4827]: I0126 09:24:05.960220 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jhkcw"]
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.138926 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.138971 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9mt\" (UniqueName: \"kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.139051 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.139083 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.240713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.240786 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.240846 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.240875 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9mt\" (UniqueName: \"kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.254459 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.254877 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.256347 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.257199 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9mt\" (UniqueName: \"kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt\") pod \"glance-db-sync-jhkcw\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.265255 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jhkcw"
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.781927 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jhkcw"]
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.819135 4827 generic.go:334] "Generic (PLEG): container finished" podID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerID="bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842" exitCode=0
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.819218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerDied","Data":"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842"}
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.824969 4827 generic.go:334] "Generic (PLEG): container finished" podID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerID="101e5bc11ac55692fe30de69f45ef6185d49a52aaaf7e4c37c5ebe506a0a1297" exitCode=0
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.825065 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerDied","Data":"101e5bc11ac55692fe30de69f45ef6185d49a52aaaf7e4c37c5ebe506a0a1297"}
Jan 26 09:24:06 crc kubenswrapper[4827]: I0126 09:24:06.825975 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhkcw" event={"ID":"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f","Type":"ContainerStarted","Data":"4932b0f00b34b317eef618d8022deafc96eae63d53fb3ff2c450865fe3dfd490"}
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.352098 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f2cms"]
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.383258 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.393119 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.406266 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2cms"]
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.472945 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fj2j\" (UniqueName: \"kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.473027 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.574622 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fj2j\" (UniqueName: \"kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.574763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.575595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.593580 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fj2j\" (UniqueName: \"kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j\") pod \"root-account-create-update-f2cms\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") " pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.725017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.835324 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerStarted","Data":"358f813ad5378dbe46f27f966c54ce92ed4b5421cdbfbedcab4e4c646d8d3c43"}
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.835794 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.842577 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerStarted","Data":"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99"}
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.843169 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.896080 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.177332382 podStartE2EDuration="52.896057886s" podCreationTimestamp="2026-01-26 09:23:15 +0000 UTC" firstStartedPulling="2026-01-26 09:23:17.802099988 +0000 UTC m=+1026.450771807" lastFinishedPulling="2026-01-26 09:23:33.520825492 +0000 UTC m=+1042.169497311" observedRunningTime="2026-01-26 09:24:07.892350956 +0000 UTC m=+1076.541022775" watchObservedRunningTime="2026-01-26 09:24:07.896057886 +0000 UTC m=+1076.544729705"
Jan 26 09:24:07 crc kubenswrapper[4827]: I0126 09:24:07.897244 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.349917873 podStartE2EDuration="51.897233908s" podCreationTimestamp="2026-01-26 09:23:16 +0000 UTC" firstStartedPulling="2026-01-26 09:23:18.066152974 +0000 UTC m=+1026.714824793" lastFinishedPulling="2026-01-26 09:23:33.613469009 +0000 UTC m=+1042.262140828" observedRunningTime="2026-01-26 09:24:07.869101657 +0000 UTC m=+1076.517773476" watchObservedRunningTime="2026-01-26 09:24:07.897233908 +0000 UTC m=+1076.545905727"
Jan 26 09:24:08 crc kubenswrapper[4827]: I0126 09:24:08.283300 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2cms"]
Jan 26 09:24:08 crc kubenswrapper[4827]: W0126 09:24:08.290618 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc98232_7bf1_47ff_af6b_0f1178fb3a18.slice/crio-e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04 WatchSource:0}: Error finding container e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04: Status 404 returned error can't find the container with id e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04
Jan 26 09:24:08 crc kubenswrapper[4827]: I0126 09:24:08.851612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2cms" event={"ID":"2dc98232-7bf1-47ff-af6b-0f1178fb3a18","Type":"ContainerStarted","Data":"ffe0dc435e87cd81b51b6d32448bfdf905d104d3e9c2dc6f108e78ea68d586cb"}
Jan 26 09:24:08 crc kubenswrapper[4827]: I0126 09:24:08.851937 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2cms" event={"ID":"2dc98232-7bf1-47ff-af6b-0f1178fb3a18","Type":"ContainerStarted","Data":"e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04"}
Jan 26 09:24:08 crc kubenswrapper[4827]: I0126 09:24:08.869623 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-f2cms" podStartSLOduration=1.8696036390000002 podStartE2EDuration="1.869603639s" podCreationTimestamp="2026-01-26 09:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:08.867404709 +0000 UTC m=+1077.516076528" watchObservedRunningTime="2026-01-26 09:24:08.869603639 +0000 UTC m=+1077.518275458"
Jan 26 09:24:09 crc kubenswrapper[4827]: I0126 09:24:09.860789 4827 generic.go:334] "Generic (PLEG): container finished" podID="2dc98232-7bf1-47ff-af6b-0f1178fb3a18" containerID="ffe0dc435e87cd81b51b6d32448bfdf905d104d3e9c2dc6f108e78ea68d586cb" exitCode=0
Jan 26 09:24:09 crc kubenswrapper[4827]: I0126 09:24:09.860881 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2cms" event={"ID":"2dc98232-7bf1-47ff-af6b-0f1178fb3a18","Type":"ContainerDied","Data":"ffe0dc435e87cd81b51b6d32448bfdf905d104d3e9c2dc6f108e78ea68d586cb"}
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.272530 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.340953 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fj2j\" (UniqueName: \"kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j\") pod \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") "
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.341062 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts\") pod \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\" (UID: \"2dc98232-7bf1-47ff-af6b-0f1178fb3a18\") "
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.349223 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2dc98232-7bf1-47ff-af6b-0f1178fb3a18" (UID: "2dc98232-7bf1-47ff-af6b-0f1178fb3a18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.377073 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j" (OuterVolumeSpecName: "kube-api-access-2fj2j") pod "2dc98232-7bf1-47ff-af6b-0f1178fb3a18" (UID: "2dc98232-7bf1-47ff-af6b-0f1178fb3a18"). InnerVolumeSpecName "kube-api-access-2fj2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.445174 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fj2j\" (UniqueName: \"kubernetes.io/projected/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-kube-api-access-2fj2j\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.445223 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc98232-7bf1-47ff-af6b-0f1178fb3a18-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.878223 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2cms" event={"ID":"2dc98232-7bf1-47ff-af6b-0f1178fb3a18","Type":"ContainerDied","Data":"e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04"}
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.878259 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e128dbd5c773e19e203d3f52630461f1220795c4cc8de0c48920f05f260c6b04"
Jan 26 09:24:11 crc kubenswrapper[4827]: I0126 09:24:11.878265 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2cms"
Jan 26 09:24:13 crc kubenswrapper[4827]: I0126 09:24:13.690267 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f2cms"]
Jan 26 09:24:13 crc kubenswrapper[4827]: I0126 09:24:13.704266 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f2cms"]
Jan 26 09:24:13 crc kubenswrapper[4827]: I0126 09:24:13.712710 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc98232-7bf1-47ff-af6b-0f1178fb3a18" path="/var/lib/kubelet/pods/2dc98232-7bf1-47ff-af6b-0f1178fb3a18/volumes"
Jan 26 09:24:14 crc kubenswrapper[4827]: I0126 09:24:14.253187 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 26 09:24:15 crc kubenswrapper[4827]: I0126 09:24:15.363865 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sjsvm" podUID="60184b1a-f656-4b71-bf13-2953f715bc12" containerName="ovn-controller" probeResult="failure" output=<
Jan 26 09:24:15 crc kubenswrapper[4827]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 26 09:24:15 crc kubenswrapper[4827]: >
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.165985 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.506083 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fkrg2"]
Jan 26 09:24:17 crc kubenswrapper[4827]: E0126 09:24:17.506802 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc98232-7bf1-47ff-af6b-0f1178fb3a18" containerName="mariadb-account-create-update"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.506827 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc98232-7bf1-47ff-af6b-0f1178fb3a18" containerName="mariadb-account-create-update"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.507006 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc98232-7bf1-47ff-af6b-0f1178fb3a18" containerName="mariadb-account-create-update"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.507658 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.516571 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.525927 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fkrg2"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.647142 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68p6\" (UniqueName: \"kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.647350 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.754681 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68p6\" (UniqueName: \"kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.754824 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.758255 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.774492 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-57bw5"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.780755 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6b35-account-create-update-gsp2f"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.784453 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.814207 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-57bw5"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.814245 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6b35-account-create-update-gsp2f"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.814336 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.827779 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68p6\" (UniqueName: \"kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6\") pod \"cinder-db-create-fkrg2\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") " pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.845256 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.858746 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f048-account-create-update-fnbq9"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.859948 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.886385 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f048-account-create-update-fnbq9"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.886855 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.939078 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-v9zpk"]
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.940606 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-v9zpk"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962725 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962830 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962849 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwdk\" (UniqueName: \"kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962885 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfk4\" (UniqueName: \"kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962923 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxjt\" (UniqueName: \"kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.962962 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:17 crc kubenswrapper[4827]: I0126 09:24:17.970348 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-v9zpk"]
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.026581 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0fe9-account-create-update-rx7j2"]
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.027941 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0fe9-account-create-update-rx7j2"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.033056 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0fe9-account-create-update-rx7j2"]
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.033099 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065039 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfk4\" (UniqueName: \"kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065127 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxjt\" (UniqueName: \"kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065181 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dmj\" (UniqueName: \"kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065219 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065322 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065357 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065402 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.065427 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cwdk\" (UniqueName: \"kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.066031 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.066627 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.066724 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.093320 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfk4\" (UniqueName: \"kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4\") pod \"cinder-f048-account-create-update-fnbq9\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") " pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.093924 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwdk\" (UniqueName: \"kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk\") pod \"barbican-6b35-account-create-update-gsp2f\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") " pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.099525 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxjt\" (UniqueName:
\"kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt\") pod \"barbican-db-create-57bw5\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") " pod="openstack/barbican-db-create-57bw5" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.121385 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-lw6v4"] Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.122661 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fkrg2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.122864 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.128276 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cg6fq" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.128503 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.128699 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.131117 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.149012 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lw6v4"] Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.167625 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jbg\" (UniqueName: \"kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 
crc kubenswrapper[4827]: I0126 09:24:18.167770 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4dmj\" (UniqueName: \"kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.167901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.167963 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.169094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.200078 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-57bw5" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.215396 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4dmj\" (UniqueName: \"kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj\") pod \"neutron-db-create-v9zpk\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") " pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.223577 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6b35-account-create-update-gsp2f" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.237599 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f048-account-create-update-fnbq9" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.256923 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269171 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269219 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269260 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w8jbg\" (UniqueName: \"kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269325 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hlx\" (UniqueName: \"kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.269864 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.304838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jbg\" (UniqueName: \"kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg\") pod \"neutron-0fe9-account-create-update-rx7j2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") " pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.349902 4827 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.370682 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.370750 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hlx\" (UniqueName: \"kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.370799 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.374028 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.376130 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc 
kubenswrapper[4827]: I0126 09:24:18.389783 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hlx\" (UniqueName: \"kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx\") pod \"keystone-db-sync-lw6v4\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.485524 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.790508 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7qgkw"] Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.792903 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.796659 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.812095 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7qgkw"] Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.878278 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9hv\" (UniqueName: \"kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv\") pod \"root-account-create-update-7qgkw\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.878322 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts\") pod \"root-account-create-update-7qgkw\" 
(UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.979582 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl9hv\" (UniqueName: \"kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv\") pod \"root-account-create-update-7qgkw\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.979660 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts\") pod \"root-account-create-update-7qgkw\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:18 crc kubenswrapper[4827]: I0126 09:24:18.980511 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts\") pod \"root-account-create-update-7qgkw\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:19 crc kubenswrapper[4827]: I0126 09:24:18.994688 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl9hv\" (UniqueName: \"kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv\") pod \"root-account-create-update-7qgkw\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") " pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:19 crc kubenswrapper[4827]: I0126 09:24:19.117114 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.362453 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sjsvm" podUID="60184b1a-f656-4b71-bf13-2953f715bc12" containerName="ovn-controller" probeResult="failure" output=< Jan 26 09:24:20 crc kubenswrapper[4827]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 09:24:20 crc kubenswrapper[4827]: > Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.443200 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.449617 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6jn8j" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.707538 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sjsvm-config-bkwr5"] Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.708851 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.711727 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.735188 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sjsvm-config-bkwr5"] Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.830832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.830904 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.830947 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.830972 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: 
\"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.831051 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.831075 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tthf\" (UniqueName: \"kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932185 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932235 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tthf\" (UniqueName: \"kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932292 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts\") pod 
\"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932326 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932355 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932371 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932548 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: 
\"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.932631 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.933161 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.934334 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:20 crc kubenswrapper[4827]: I0126 09:24:20.952270 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tthf\" (UniqueName: \"kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf\") pod \"ovn-controller-sjsvm-config-bkwr5\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") " pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:21 crc kubenswrapper[4827]: I0126 09:24:21.032066 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.765206 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fkrg2"] Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.780706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6b35-account-create-update-gsp2f"] Jan 26 09:24:22 crc kubenswrapper[4827]: W0126 09:24:22.799924 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6985b23f_09b2_473e_bdbf_b0c115a93ca0.slice/crio-db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e WatchSource:0}: Error finding container db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e: Status 404 returned error can't find the container with id db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.937566 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-v9zpk"] Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.955730 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f048-account-create-update-fnbq9"] Jan 26 09:24:22 crc kubenswrapper[4827]: W0126 09:24:22.957997 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a210d5f_6a79_48a8_90af_ccef43549ff7.slice/crio-76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d WatchSource:0}: Error finding container 76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d: Status 404 returned error can't find the container with id 76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.963551 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sjsvm-config-bkwr5"] Jan 
26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.973242 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6b35-account-create-update-gsp2f" event={"ID":"6985b23f-09b2-473e-bdbf-b0c115a93ca0","Type":"ContainerStarted","Data":"db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e"}
Jan 26 09:24:22 crc kubenswrapper[4827]: W0126 09:24:22.973581 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68bb7463_e3b4_42cd_ba68_4a2361ec5a6a.slice/crio-93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a WatchSource:0}: Error finding container 93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a: Status 404 returned error can't find the container with id 93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a
Jan 26 09:24:22 crc kubenswrapper[4827]: W0126 09:24:22.974996 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5641f3e6_baea_4786_bb73_175101a77bc5.slice/crio-ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6 WatchSource:0}: Error finding container ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6: Status 404 returned error can't find the container with id ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6
Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.975841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fkrg2" event={"ID":"1191821d-a29f-4a52-ae1a-29659e28f5dc","Type":"ContainerStarted","Data":"ae85bdec512f259ed42b3efea2735fc4c942608bae77546d946249e86debad35"}
Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.977733 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7qgkw"]
Jan 26 09:24:22 crc kubenswrapper[4827]: I0126 09:24:22.986900 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-57bw5"]
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.156947 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lw6v4"]
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.176128 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0fe9-account-create-update-rx7j2"]
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.986308 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f048-account-create-update-fnbq9" event={"ID":"2a210d5f-6a79-48a8-90af-ccef43549ff7","Type":"ContainerStarted","Data":"ba99c090fce3cd5b0b65e153f5e47e3bb52ba7ea93b81575ea88018eca5c766a"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.987277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f048-account-create-update-fnbq9" event={"ID":"2a210d5f-6a79-48a8-90af-ccef43549ff7","Type":"ContainerStarted","Data":"76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.989130 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7qgkw" event={"ID":"5641f3e6-baea-4786-bb73-175101a77bc5","Type":"ContainerStarted","Data":"81d40b592e87126d4d2d8070bf2799e87fe6dd8a05fae31fb4bf3e854fd9d3d7"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.989167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7qgkw" event={"ID":"5641f3e6-baea-4786-bb73-175101a77bc5","Type":"ContainerStarted","Data":"ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.993051 4827 generic.go:334] "Generic (PLEG): container finished" podID="6985b23f-09b2-473e-bdbf-b0c115a93ca0" containerID="c29bf2bb2c49ad2b9a484db9c7fd7d8a730c515820a8833fb8486c474fd9cd0b" exitCode=0
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.993121 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6b35-account-create-update-gsp2f" event={"ID":"6985b23f-09b2-473e-bdbf-b0c115a93ca0","Type":"ContainerDied","Data":"c29bf2bb2c49ad2b9a484db9c7fd7d8a730c515820a8833fb8486c474fd9cd0b"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.996064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-57bw5" event={"ID":"134e2635-96c4-478c-85c3-1bf9b44ad38a","Type":"ContainerStarted","Data":"49f9062a2d93d9d163cd534025748ed6f8f02831b5f9332a96661688b0d9fa03"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.996105 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-57bw5" event={"ID":"134e2635-96c4-478c-85c3-1bf9b44ad38a","Type":"ContainerStarted","Data":"80d328834734a1cd303b124b93ddc41e7b493d91c90e29cb92d8b7b80bdfc052"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.999761 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0fe9-account-create-update-rx7j2" event={"ID":"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2","Type":"ContainerStarted","Data":"6a681098e011dcbdc20433aa2fd95d73e73ccb51f87c0ce0f7bdbc8723729c33"}
Jan 26 09:24:23 crc kubenswrapper[4827]: I0126 09:24:23.999848 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0fe9-account-create-update-rx7j2" event={"ID":"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2","Type":"ContainerStarted","Data":"a544efc6c3510a506233891a14f876665c45c453e454b76cfb116b5b4da044c9"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.003727 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f048-account-create-update-fnbq9" podStartSLOduration=7.003712316 podStartE2EDuration="7.003712316s" podCreationTimestamp="2026-01-26 09:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:24.002873134 +0000 UTC m=+1092.651544953" watchObservedRunningTime="2026-01-26 09:24:24.003712316 +0000 UTC m=+1092.652384135"
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.011021 4827 generic.go:334] "Generic (PLEG): container finished" podID="1191821d-a29f-4a52-ae1a-29659e28f5dc" containerID="29ac3a728c102200cedf2ca549bae9dfbe01457148d20019cec2b82b5cc05f34" exitCode=0
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.011217 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fkrg2" event={"ID":"1191821d-a29f-4a52-ae1a-29659e28f5dc","Type":"ContainerDied","Data":"29ac3a728c102200cedf2ca549bae9dfbe01457148d20019cec2b82b5cc05f34"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.023341 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v9zpk" event={"ID":"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a","Type":"ContainerStarted","Data":"6d422c6a9f1687d2d183b55253fad896196a9d2dd6819444da6c0d8308fdfc67"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.023384 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v9zpk" event={"ID":"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a","Type":"ContainerStarted","Data":"93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.027179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm-config-bkwr5" event={"ID":"cea8bf5b-01f3-46d7-95d7-16a7c74412bb","Type":"ContainerStarted","Data":"bd8dc6f0d043ed7d40d6eca2f1fe2642cae1c36669b4c7e467562cbe17dbd00e"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.027246 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm-config-bkwr5" event={"ID":"cea8bf5b-01f3-46d7-95d7-16a7c74412bb","Type":"ContainerStarted","Data":"d91ad27b90389bbe85a98318de8387ab3dfaa610a7c9378dac89b9c2a321d5d9"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.031147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhkcw" event={"ID":"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f","Type":"ContainerStarted","Data":"061d7e07c7f95da73bfbe962cfae08fba1a231fc1b653b6e25e09b468d261e8a"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.039291 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lw6v4" event={"ID":"203375e2-091f-4589-a8a6-e12f7af8a24d","Type":"ContainerStarted","Data":"2bc519ab8bc27d6975b7276bea9ed7f904d47b9d204801d1f42eeacdf880ddcb"}
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.094884 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0fe9-account-create-update-rx7j2" podStartSLOduration=7.094861433 podStartE2EDuration="7.094861433s" podCreationTimestamp="2026-01-26 09:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:24.064947073 +0000 UTC m=+1092.713618892" watchObservedRunningTime="2026-01-26 09:24:24.094861433 +0000 UTC m=+1092.743533252"
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.097280 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-7qgkw" podStartSLOduration=6.097266437 podStartE2EDuration="6.097266437s" podCreationTimestamp="2026-01-26 09:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:24.092329114 +0000 UTC m=+1092.741000933" watchObservedRunningTime="2026-01-26 09:24:24.097266437 +0000 UTC m=+1092.745938256"
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.111135 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jhkcw" podStartSLOduration=3.578679126 podStartE2EDuration="19.111111852s" podCreationTimestamp="2026-01-26 09:24:05 +0000 UTC" firstStartedPulling="2026-01-26 09:24:06.761198249 +0000 UTC m=+1075.409870068" lastFinishedPulling="2026-01-26 09:24:22.293630975 +0000 UTC m=+1090.942302794" observedRunningTime="2026-01-26 09:24:24.107763892 +0000 UTC m=+1092.756435721" watchObservedRunningTime="2026-01-26 09:24:24.111111852 +0000 UTC m=+1092.759783671"
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.139470 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sjsvm-config-bkwr5" podStartSLOduration=4.139447509 podStartE2EDuration="4.139447509s" podCreationTimestamp="2026-01-26 09:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:24.12839855 +0000 UTC m=+1092.777070369" watchObservedRunningTime="2026-01-26 09:24:24.139447509 +0000 UTC m=+1092.788119328"
Jan 26 09:24:24 crc kubenswrapper[4827]: I0126 09:24:24.151765 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-v9zpk" podStartSLOduration=7.151740822 podStartE2EDuration="7.151740822s" podCreationTimestamp="2026-01-26 09:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:24.145451382 +0000 UTC m=+1092.794123201" watchObservedRunningTime="2026-01-26 09:24:24.151740822 +0000 UTC m=+1092.800412641"
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.050629 4827 generic.go:334] "Generic (PLEG): container finished" podID="cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" containerID="6a681098e011dcbdc20433aa2fd95d73e73ccb51f87c0ce0f7bdbc8723729c33" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.050770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0fe9-account-create-update-rx7j2" event={"ID":"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2","Type":"ContainerDied","Data":"6a681098e011dcbdc20433aa2fd95d73e73ccb51f87c0ce0f7bdbc8723729c33"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.053500 4827 generic.go:334] "Generic (PLEG): container finished" podID="2a210d5f-6a79-48a8-90af-ccef43549ff7" containerID="ba99c090fce3cd5b0b65e153f5e47e3bb52ba7ea93b81575ea88018eca5c766a" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.053588 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f048-account-create-update-fnbq9" event={"ID":"2a210d5f-6a79-48a8-90af-ccef43549ff7","Type":"ContainerDied","Data":"ba99c090fce3cd5b0b65e153f5e47e3bb52ba7ea93b81575ea88018eca5c766a"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.055249 4827 generic.go:334] "Generic (PLEG): container finished" podID="5641f3e6-baea-4786-bb73-175101a77bc5" containerID="81d40b592e87126d4d2d8070bf2799e87fe6dd8a05fae31fb4bf3e854fd9d3d7" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.055288 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7qgkw" event={"ID":"5641f3e6-baea-4786-bb73-175101a77bc5","Type":"ContainerDied","Data":"81d40b592e87126d4d2d8070bf2799e87fe6dd8a05fae31fb4bf3e854fd9d3d7"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.056821 4827 generic.go:334] "Generic (PLEG): container finished" podID="68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" containerID="6d422c6a9f1687d2d183b55253fad896196a9d2dd6819444da6c0d8308fdfc67" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.056868 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v9zpk" event={"ID":"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a","Type":"ContainerDied","Data":"6d422c6a9f1687d2d183b55253fad896196a9d2dd6819444da6c0d8308fdfc67"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.058234 4827 generic.go:334] "Generic (PLEG): container finished" podID="cea8bf5b-01f3-46d7-95d7-16a7c74412bb" containerID="bd8dc6f0d043ed7d40d6eca2f1fe2642cae1c36669b4c7e467562cbe17dbd00e" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.058392 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm-config-bkwr5" event={"ID":"cea8bf5b-01f3-46d7-95d7-16a7c74412bb","Type":"ContainerDied","Data":"bd8dc6f0d043ed7d40d6eca2f1fe2642cae1c36669b4c7e467562cbe17dbd00e"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.060488 4827 generic.go:334] "Generic (PLEG): container finished" podID="134e2635-96c4-478c-85c3-1bf9b44ad38a" containerID="49f9062a2d93d9d163cd534025748ed6f8f02831b5f9332a96661688b0d9fa03" exitCode=0
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.060765 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-57bw5" event={"ID":"134e2635-96c4-478c-85c3-1bf9b44ad38a","Type":"ContainerDied","Data":"49f9062a2d93d9d163cd534025748ed6f8f02831b5f9332a96661688b0d9fa03"}
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.394932 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sjsvm"
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.472956 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.549494 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts\") pod \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.549558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cwdk\" (UniqueName: \"kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk\") pod \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\" (UID: \"6985b23f-09b2-473e-bdbf-b0c115a93ca0\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.551000 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6985b23f-09b2-473e-bdbf-b0c115a93ca0" (UID: "6985b23f-09b2-473e-bdbf-b0c115a93ca0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.558701 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk" (OuterVolumeSpecName: "kube-api-access-6cwdk") pod "6985b23f-09b2-473e-bdbf-b0c115a93ca0" (UID: "6985b23f-09b2-473e-bdbf-b0c115a93ca0"). InnerVolumeSpecName "kube-api-access-6cwdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.628439 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.633988 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.653903 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts\") pod \"1191821d-a29f-4a52-ae1a-29659e28f5dc\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.653965 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68p6\" (UniqueName: \"kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6\") pod \"1191821d-a29f-4a52-ae1a-29659e28f5dc\" (UID: \"1191821d-a29f-4a52-ae1a-29659e28f5dc\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.654060 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts\") pod \"134e2635-96c4-478c-85c3-1bf9b44ad38a\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.654121 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxjt\" (UniqueName: \"kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt\") pod \"134e2635-96c4-478c-85c3-1bf9b44ad38a\" (UID: \"134e2635-96c4-478c-85c3-1bf9b44ad38a\") "
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.654481 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6985b23f-09b2-473e-bdbf-b0c115a93ca0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.654496 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cwdk\" (UniqueName: \"kubernetes.io/projected/6985b23f-09b2-473e-bdbf-b0c115a93ca0-kube-api-access-6cwdk\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.660235 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6" (OuterVolumeSpecName: "kube-api-access-x68p6") pod "1191821d-a29f-4a52-ae1a-29659e28f5dc" (UID: "1191821d-a29f-4a52-ae1a-29659e28f5dc"). InnerVolumeSpecName "kube-api-access-x68p6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.660494 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "134e2635-96c4-478c-85c3-1bf9b44ad38a" (UID: "134e2635-96c4-478c-85c3-1bf9b44ad38a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.660530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1191821d-a29f-4a52-ae1a-29659e28f5dc" (UID: "1191821d-a29f-4a52-ae1a-29659e28f5dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.662228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt" (OuterVolumeSpecName: "kube-api-access-fdxjt") pod "134e2635-96c4-478c-85c3-1bf9b44ad38a" (UID: "134e2635-96c4-478c-85c3-1bf9b44ad38a"). InnerVolumeSpecName "kube-api-access-fdxjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.757053 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68p6\" (UniqueName: \"kubernetes.io/projected/1191821d-a29f-4a52-ae1a-29659e28f5dc-kube-api-access-x68p6\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.757085 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/134e2635-96c4-478c-85c3-1bf9b44ad38a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.757097 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxjt\" (UniqueName: \"kubernetes.io/projected/134e2635-96c4-478c-85c3-1bf9b44ad38a-kube-api-access-fdxjt\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:25 crc kubenswrapper[4827]: I0126 09:24:25.757109 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1191821d-a29f-4a52-ae1a-29659e28f5dc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.073222 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6b35-account-create-update-gsp2f"
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.073220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6b35-account-create-update-gsp2f" event={"ID":"6985b23f-09b2-473e-bdbf-b0c115a93ca0","Type":"ContainerDied","Data":"db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e"}
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.073342 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3081365f0a3ed0e6bcdae074d68a758735f2e8ddbfa57025b7c2f9c69d2e3e"
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.076917 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-57bw5" event={"ID":"134e2635-96c4-478c-85c3-1bf9b44ad38a","Type":"ContainerDied","Data":"80d328834734a1cd303b124b93ddc41e7b493d91c90e29cb92d8b7b80bdfc052"}
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.076956 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80d328834734a1cd303b124b93ddc41e7b493d91c90e29cb92d8b7b80bdfc052"
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.076968 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-57bw5"
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.080173 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fkrg2"
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.080209 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fkrg2" event={"ID":"1191821d-a29f-4a52-ae1a-29659e28f5dc","Type":"ContainerDied","Data":"ae85bdec512f259ed42b3efea2735fc4c942608bae77546d946249e86debad35"}
Jan 26 09:24:26 crc kubenswrapper[4827]: I0126 09:24:26.080229 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae85bdec512f259ed42b3efea2735fc4c942608bae77546d946249e86debad35"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.685869 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-v9zpk"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.694064 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7qgkw"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.731001 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f048-account-create-update-fnbq9"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.750460 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0fe9-account-create-update-rx7j2"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.758028 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sjsvm-config-bkwr5"
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.820324 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts\") pod \"5641f3e6-baea-4786-bb73-175101a77bc5\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.820416 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dmj\" (UniqueName: \"kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj\") pod \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.820461 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl9hv\" (UniqueName: \"kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv\") pod \"5641f3e6-baea-4786-bb73-175101a77bc5\" (UID: \"5641f3e6-baea-4786-bb73-175101a77bc5\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.820527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts\") pod \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\" (UID: \"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.823742 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5641f3e6-baea-4786-bb73-175101a77bc5" (UID: "5641f3e6-baea-4786-bb73-175101a77bc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.825114 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" (UID: "68bb7463-e3b4-42cd-ba68-4a2361ec5a6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.832913 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv" (OuterVolumeSpecName: "kube-api-access-vl9hv") pod "5641f3e6-baea-4786-bb73-175101a77bc5" (UID: "5641f3e6-baea-4786-bb73-175101a77bc5"). InnerVolumeSpecName "kube-api-access-vl9hv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.835165 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj" (OuterVolumeSpecName: "kube-api-access-q4dmj") pod "68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" (UID: "68bb7463-e3b4-42cd-ba68-4a2361ec5a6a"). InnerVolumeSpecName "kube-api-access-q4dmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925566 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jbg\" (UniqueName: \"kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg\") pod \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925746 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts\") pod \"2a210d5f-6a79-48a8-90af-ccef43549ff7\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925797 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925840 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925879 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfk4\" (UniqueName: \"kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4\") pod \"2a210d5f-6a79-48a8-90af-ccef43549ff7\" (UID: \"2a210d5f-6a79-48a8-90af-ccef43549ff7\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.925990 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926068 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tthf\" (UniqueName: \"kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926114 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts\") pod \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\" (UID: \"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926158 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926237 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn\") pod \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\" (UID: \"cea8bf5b-01f3-46d7-95d7-16a7c74412bb\") "
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926748 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl9hv\" (UniqueName: \"kubernetes.io/projected/5641f3e6-baea-4786-bb73-175101a77bc5-kube-api-access-vl9hv\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926780 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926806 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5641f3e6-baea-4786-bb73-175101a77bc5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926824 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4dmj\" (UniqueName: \"kubernetes.io/projected/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a-kube-api-access-q4dmj\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.926899 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.927817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.928251 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.928304 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run" (OuterVolumeSpecName: "var-run") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.928299 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" (UID: "cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.928361 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a210d5f-6a79-48a8-90af-ccef43549ff7" (UID: "2a210d5f-6a79-48a8-90af-ccef43549ff7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.929059 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts" (OuterVolumeSpecName: "scripts") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.930888 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4" (OuterVolumeSpecName: "kube-api-access-dwfk4") pod "2a210d5f-6a79-48a8-90af-ccef43549ff7" (UID: "2a210d5f-6a79-48a8-90af-ccef43549ff7"). InnerVolumeSpecName "kube-api-access-dwfk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.931923 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg" (OuterVolumeSpecName: "kube-api-access-w8jbg") pod "cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" (UID: "cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2"). InnerVolumeSpecName "kube-api-access-w8jbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:28 crc kubenswrapper[4827]: I0126 09:24:28.932352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf" (OuterVolumeSpecName: "kube-api-access-6tthf") pod "cea8bf5b-01f3-46d7-95d7-16a7c74412bb" (UID: "cea8bf5b-01f3-46d7-95d7-16a7c74412bb"). InnerVolumeSpecName "kube-api-access-6tthf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028128 4827 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028167 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tthf\" (UniqueName: \"kubernetes.io/projected/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-kube-api-access-6tthf\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028182 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028196 4827 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028207 4827 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028219 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jbg\" (UniqueName: \"kubernetes.io/projected/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2-kube-api-access-w8jbg\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028228 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a210d5f-6a79-48a8-90af-ccef43549ff7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:24:29 crc kubenswrapper[4827]: I0126
09:24:29.028236 4827 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028244 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cea8bf5b-01f3-46d7-95d7-16a7c74412bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.028251 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwfk4\" (UniqueName: \"kubernetes.io/projected/2a210d5f-6a79-48a8-90af-ccef43549ff7-kube-api-access-dwfk4\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.108975 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v9zpk" event={"ID":"68bb7463-e3b4-42cd-ba68-4a2361ec5a6a","Type":"ContainerDied","Data":"93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.109012 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f628ea75eba030e2625ee90e690435ea645dedc4b19cfb5e4ea8b0eade994a" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.109081 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-v9zpk" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.113136 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sjsvm-config-bkwr5" event={"ID":"cea8bf5b-01f3-46d7-95d7-16a7c74412bb","Type":"ContainerDied","Data":"d91ad27b90389bbe85a98318de8387ab3dfaa610a7c9378dac89b9c2a321d5d9"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.113167 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91ad27b90389bbe85a98318de8387ab3dfaa610a7c9378dac89b9c2a321d5d9" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.113209 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sjsvm-config-bkwr5" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.114780 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0fe9-account-create-update-rx7j2" event={"ID":"cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2","Type":"ContainerDied","Data":"a544efc6c3510a506233891a14f876665c45c453e454b76cfb116b5b4da044c9"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.114804 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a544efc6c3510a506233891a14f876665c45c453e454b76cfb116b5b4da044c9" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.114837 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0fe9-account-create-update-rx7j2" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.123151 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lw6v4" event={"ID":"203375e2-091f-4589-a8a6-e12f7af8a24d","Type":"ContainerStarted","Data":"ecdd15148e8426f5eb56adfd2f83fa8d4461abe94fb559cd7ead41ebac5fc4d9"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.126879 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f048-account-create-update-fnbq9" event={"ID":"2a210d5f-6a79-48a8-90af-ccef43549ff7","Type":"ContainerDied","Data":"76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.126981 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76569361b361649b9b03968d669a1db7558a818763522ead164f82f37051194d" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.127080 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f048-account-create-update-fnbq9" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.128922 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7qgkw" event={"ID":"5641f3e6-baea-4786-bb73-175101a77bc5","Type":"ContainerDied","Data":"ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6"} Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.128965 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8416a38fb37707ff90b7f56a8b71a870bd467924b5148e0126cadaf74d4fc6" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.129063 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7qgkw" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.147617 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-lw6v4" podStartSLOduration=5.804390753 podStartE2EDuration="11.147603319s" podCreationTimestamp="2026-01-26 09:24:18 +0000 UTC" firstStartedPulling="2026-01-26 09:24:23.188079307 +0000 UTC m=+1091.836751126" lastFinishedPulling="2026-01-26 09:24:28.531291873 +0000 UTC m=+1097.179963692" observedRunningTime="2026-01-26 09:24:29.141897615 +0000 UTC m=+1097.790569454" watchObservedRunningTime="2026-01-26 09:24:29.147603319 +0000 UTC m=+1097.796275138" Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.868188 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sjsvm-config-bkwr5"] Jan 26 09:24:29 crc kubenswrapper[4827]: I0126 09:24:29.875449 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sjsvm-config-bkwr5"] Jan 26 09:24:31 crc kubenswrapper[4827]: I0126 09:24:31.145557 4827 generic.go:334] "Generic (PLEG): container finished" podID="bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" containerID="061d7e07c7f95da73bfbe962cfae08fba1a231fc1b653b6e25e09b468d261e8a" exitCode=0 Jan 26 09:24:31 crc kubenswrapper[4827]: I0126 09:24:31.145665 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhkcw" event={"ID":"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f","Type":"ContainerDied","Data":"061d7e07c7f95da73bfbe962cfae08fba1a231fc1b653b6e25e09b468d261e8a"} Jan 26 09:24:31 crc kubenswrapper[4827]: I0126 09:24:31.716739 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8bf5b-01f3-46d7-95d7-16a7c74412bb" path="/var/lib/kubelet/pods/cea8bf5b-01f3-46d7-95d7-16a7c74412bb/volumes" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.163891 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="203375e2-091f-4589-a8a6-e12f7af8a24d" containerID="ecdd15148e8426f5eb56adfd2f83fa8d4461abe94fb559cd7ead41ebac5fc4d9" exitCode=0 Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.164744 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lw6v4" event={"ID":"203375e2-091f-4589-a8a6-e12f7af8a24d","Type":"ContainerDied","Data":"ecdd15148e8426f5eb56adfd2f83fa8d4461abe94fb559cd7ead41ebac5fc4d9"} Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.540088 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jhkcw" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.622928 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data\") pod \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.622995 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data\") pod \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.623021 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle\") pod \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.623076 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk9mt\" (UniqueName: \"kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt\") pod 
\"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\" (UID: \"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f\") " Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.628791 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" (UID: "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.628808 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt" (OuterVolumeSpecName: "kube-api-access-vk9mt") pod "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" (UID: "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f"). InnerVolumeSpecName "kube-api-access-vk9mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.648541 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" (UID: "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.667221 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data" (OuterVolumeSpecName: "config-data") pod "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" (UID: "bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.725525 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.725826 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.725908 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk9mt\" (UniqueName: \"kubernetes.io/projected/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-kube-api-access-vk9mt\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:32 crc kubenswrapper[4827]: I0126 09:24:32.725988 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.173838 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhkcw" event={"ID":"bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f","Type":"ContainerDied","Data":"4932b0f00b34b317eef618d8022deafc96eae63d53fb3ff2c450865fe3dfd490"} Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.173885 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4932b0f00b34b317eef618d8022deafc96eae63d53fb3ff2c450865fe3dfd490" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.173856 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jhkcw" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.379361 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.539423 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle\") pod \"203375e2-091f-4589-a8a6-e12f7af8a24d\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.539559 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data\") pod \"203375e2-091f-4589-a8a6-e12f7af8a24d\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.539600 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hlx\" (UniqueName: \"kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx\") pod \"203375e2-091f-4589-a8a6-e12f7af8a24d\" (UID: \"203375e2-091f-4589-a8a6-e12f7af8a24d\") " Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.549123 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx" (OuterVolumeSpecName: "kube-api-access-75hlx") pod "203375e2-091f-4589-a8a6-e12f7af8a24d" (UID: "203375e2-091f-4589-a8a6-e12f7af8a24d"). InnerVolumeSpecName "kube-api-access-75hlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.568497 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "203375e2-091f-4589-a8a6-e12f7af8a24d" (UID: "203375e2-091f-4589-a8a6-e12f7af8a24d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.609755 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610071 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1191821d-a29f-4a52-ae1a-29659e28f5dc" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610082 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1191821d-a29f-4a52-ae1a-29659e28f5dc" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610096 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" containerName="glance-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610103 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" containerName="glance-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610115 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea8bf5b-01f3-46d7-95d7-16a7c74412bb" containerName="ovn-config" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610122 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea8bf5b-01f3-46d7-95d7-16a7c74412bb" containerName="ovn-config" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610132 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610138 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610151 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="134e2635-96c4-478c-85c3-1bf9b44ad38a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610159 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="134e2635-96c4-478c-85c3-1bf9b44ad38a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610168 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5641f3e6-baea-4786-bb73-175101a77bc5" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610174 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5641f3e6-baea-4786-bb73-175101a77bc5" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610184 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6985b23f-09b2-473e-bdbf-b0c115a93ca0" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610189 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6985b23f-09b2-473e-bdbf-b0c115a93ca0" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610199 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610205 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: E0126 09:24:33.610213 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a210d5f-6a79-48a8-90af-ccef43549ff7" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610218 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a210d5f-6a79-48a8-90af-ccef43549ff7" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc 
kubenswrapper[4827]: E0126 09:24:33.610226 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203375e2-091f-4589-a8a6-e12f7af8a24d" containerName="keystone-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.610231 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="203375e2-091f-4589-a8a6-e12f7af8a24d" containerName="keystone-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.614920 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a210d5f-6a79-48a8-90af-ccef43549ff7" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.614963 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="134e2635-96c4-478c-85c3-1bf9b44ad38a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.614981 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5641f3e6-baea-4786-bb73-175101a77bc5" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615001 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="203375e2-091f-4589-a8a6-e12f7af8a24d" containerName="keystone-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615010 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615021 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" containerName="glance-db-sync" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615031 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea8bf5b-01f3-46d7-95d7-16a7c74412bb" containerName="ovn-config" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615042 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6985b23f-09b2-473e-bdbf-b0c115a93ca0" containerName="mariadb-account-create-update" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615050 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1191821d-a29f-4a52-ae1a-29659e28f5dc" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.615057 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" containerName="mariadb-database-create" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.616142 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.634126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data" (OuterVolumeSpecName: "config-data") pod "203375e2-091f-4589-a8a6-e12f7af8a24d" (UID: "203375e2-091f-4589-a8a6-e12f7af8a24d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.637606 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643355 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzj4r\" (UniqueName: \"kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643453 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643477 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643539 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643623 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643656 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hlx\" (UniqueName: \"kubernetes.io/projected/203375e2-091f-4589-a8a6-e12f7af8a24d-kube-api-access-75hlx\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.643671 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203375e2-091f-4589-a8a6-e12f7af8a24d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.744525 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.744792 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.744834 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.744852 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.745312 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.745453 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.745558 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzj4r\" (UniqueName: \"kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.746098 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.746378 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.761704 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzj4r\" (UniqueName: \"kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r\") pod \"dnsmasq-dns-57768dd7b5-t6d84\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:33 crc kubenswrapper[4827]: I0126 09:24:33.978778 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.185978 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lw6v4" event={"ID":"203375e2-091f-4589-a8a6-e12f7af8a24d","Type":"ContainerDied","Data":"2bc519ab8bc27d6975b7276bea9ed7f904d47b9d204801d1f42eeacdf880ddcb"} Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.186016 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc519ab8bc27d6975b7276bea9ed7f904d47b9d204801d1f42eeacdf880ddcb" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.186030 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lw6v4" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.370273 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.408008 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.409304 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.426101 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.457987 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.458034 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.458062 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrdn\" (UniqueName: \"kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 
09:24:34.458104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.458126 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.471561 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jk4l5"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.475217 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.478757 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.479256 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.479499 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.479610 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.479748 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cg6fq" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.499088 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jk4l5"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559492 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559575 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kj4k\" (UniqueName: \"kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559721 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559746 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559770 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559803 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559830 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559856 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrdn\" (UniqueName: \"kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.559877 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.561002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.561268 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb\") 
pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.564264 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.564597 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.566514 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.630347 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrdn\" (UniqueName: \"kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn\") pod \"dnsmasq-dns-78fbc4bbf-5fwd2\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.660867 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.660923 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.660961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.661056 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kj4k\" (UniqueName: \"kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.661095 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.661124 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.671718 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts\") pod \"keystone-bootstrap-jk4l5\" (UID: 
\"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.672939 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.680256 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.680429 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.680658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.700722 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kj4k\" (UniqueName: \"kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k\") pod \"keystone-bootstrap-jk4l5\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 
09:24:34.725114 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.814119 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.825309 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.860096 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869101 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869174 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869238 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869257 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869288 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869323 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.869340 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcwc\" (UniqueName: \"kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.877286 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.877527 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.914831 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970424 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970474 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970514 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970552 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970571 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhcwc\" (UniqueName: \"kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.970612 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 
09:24:34.970661 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.974139 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.974311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.991800 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.992090 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.992414 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " 
pod="openstack/ceilometer-0" Jan 26 09:24:34 crc kubenswrapper[4827]: I0126 09:24:34.995410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.048622 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhcwc\" (UniqueName: \"kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc\") pod \"ceilometer-0\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " pod="openstack/ceilometer-0" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.048744 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tjnkp"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.049975 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.063171 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.063376 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.063479 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fxhkx" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.075973 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.076079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpzps\" (UniqueName: \"kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.076115 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.076392 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bpxp5"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.077443 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.089738 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tjnkp"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.103753 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bpxp5"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.106228 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.106458 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.106663 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c886m" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.189558 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.189655 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzps\" (UniqueName: \"kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.189683 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " 
pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.205431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.212273 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lwd9n"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.214544 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.215365 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.217722 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.231386 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" event={"ID":"89fcc075-6724-447c-a7f9-6dbe4934fa2d","Type":"ContainerStarted","Data":"9e1c1de1278791358ea0644b48968ba3a4816752d580fc63b9559c2224b978bb"} Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.231604 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" podUID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" containerName="init" containerID="cri-o://9e1c1de1278791358ea0644b48968ba3a4816752d580fc63b9559c2224b978bb" gracePeriod=10 Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.231439 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" event={"ID":"89fcc075-6724-447c-a7f9-6dbe4934fa2d","Type":"ContainerStarted","Data":"1eaecd90dfe131b457ae69be71bece516c0271d7c0de1c089972aeb2bbd9a2d0"} Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.232495 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5qpq5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.233063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzps\" (UniqueName: \"kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps\") pod \"neutron-db-sync-tjnkp\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") " pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.237066 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.244596 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lwd9n"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.287832 4827 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294259 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gj7t\" (UniqueName: \"kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294313 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294335 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294352 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4vk\" (UniqueName: \"kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294390 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts\") pod \"cinder-db-sync-bpxp5\" 
(UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294427 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294457 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294484 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.294516 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.350723 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s6fvd"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.354435 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.383597 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.383833 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.384443 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w7cfb" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.384496 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s6fvd"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396173 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gj7t\" (UniqueName: \"kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396223 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396265 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396290 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tq4vk\" (UniqueName: \"kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396334 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396357 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396389 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396440 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.396474 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.398735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.417910 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tjnkp" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.421447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.422195 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.422697 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.434970 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tq4vk\" (UniqueName: \"kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.437032 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data\") pod \"barbican-db-sync-lwd9n\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") " pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.438598 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.441239 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.459079 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.463198 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gj7t\" (UniqueName: \"kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t\") pod \"cinder-db-sync-bpxp5\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.470475 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data\") pod \"cinder-db-sync-bpxp5\" (UID: 
\"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.489681 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500433 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79ts\" (UniqueName: \"kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500469 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500487 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500515 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500606 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500650 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbkk\" (UniqueName: \"kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500678 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.500692 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.551624 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.608832 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.608919 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.608943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.608957 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.608987 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbkk\" (UniqueName: \"kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.609031 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.609047 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.609074 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v79ts\" (UniqueName: \"kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.609089 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.609107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.614954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.619409 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.620110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.620422 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.621116 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: 
\"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.664836 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.665595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.666115 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.685978 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79ts\" (UniqueName: \"kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts\") pod \"dnsmasq-dns-5d87b7c6dc-4blrd\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.689015 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbkk\" (UniqueName: \"kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk\") pod \"placement-db-sync-s6fvd\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") " pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc 
kubenswrapper[4827]: I0126 09:24:35.702234 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.722663 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.730565 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.754476 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s6fvd" Jan 26 09:24:35 crc kubenswrapper[4827]: I0126 09:24:35.804126 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jk4l5"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.183856 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.264209 4827 generic.go:334] "Generic (PLEG): container finished" podID="fb423745-95c8-483c-bfe4-bc7e57ccbe02" containerID="004b53732e8f25f570fcbd7a894e9da2da9b125a45679428638bf2667965a8e6" exitCode=0 Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.264278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" event={"ID":"fb423745-95c8-483c-bfe4-bc7e57ccbe02","Type":"ContainerDied","Data":"004b53732e8f25f570fcbd7a894e9da2da9b125a45679428638bf2667965a8e6"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.264303 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" event={"ID":"fb423745-95c8-483c-bfe4-bc7e57ccbe02","Type":"ContainerStarted","Data":"086e46811819893f698109723197c72ca02685789caf6d8c3f70f60d5929e1b3"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.272088 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-jk4l5" event={"ID":"94433d65-9e54-4812-ac43-ee89c91d3506","Type":"ContainerStarted","Data":"31bdf708d5b0eacca7094ada130a76850591ac86386f7e16fd5a6d231f186e89"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.272122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk4l5" event={"ID":"94433d65-9e54-4812-ac43-ee89c91d3506","Type":"ContainerStarted","Data":"d22eaef24f4344b355f249d279f50b965cd6e52c22dff6d7aa6c7e2a767db402"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.274933 4827 generic.go:334] "Generic (PLEG): container finished" podID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" containerID="9e1c1de1278791358ea0644b48968ba3a4816752d580fc63b9559c2224b978bb" exitCode=0 Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.274973 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" event={"ID":"89fcc075-6724-447c-a7f9-6dbe4934fa2d","Type":"ContainerDied","Data":"9e1c1de1278791358ea0644b48968ba3a4816752d580fc63b9559c2224b978bb"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.277509 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerStarted","Data":"093721ddf83938312a017f4e883fbbd5bcece1621b76638639f82a684b85d35d"} Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.315093 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jk4l5" podStartSLOduration=2.315078397 podStartE2EDuration="2.315078397s" podCreationTimestamp="2026-01-26 09:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:36.31187418 +0000 UTC m=+1104.960545999" watchObservedRunningTime="2026-01-26 09:24:36.315078397 +0000 UTC m=+1104.963750216" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.413023 
4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tjnkp"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.441613 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.545910 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lwd9n"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.557600 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.567022 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb\") pod \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.567159 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc\") pod \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.578225 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb\") pod \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.578519 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzj4r\" (UniqueName: \"kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r\") pod \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\" (UID: 
\"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.578598 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config\") pod \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\" (UID: \"89fcc075-6724-447c-a7f9-6dbe4934fa2d\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.610790 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r" (OuterVolumeSpecName: "kube-api-access-wzj4r") pod "89fcc075-6724-447c-a7f9-6dbe4934fa2d" (UID: "89fcc075-6724-447c-a7f9-6dbe4934fa2d"). InnerVolumeSpecName "kube-api-access-wzj4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.628465 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bpxp5"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.657825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89fcc075-6724-447c-a7f9-6dbe4934fa2d" (UID: "89fcc075-6724-447c-a7f9-6dbe4934fa2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.694579 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config" (OuterVolumeSpecName: "config") pod "89fcc075-6724-447c-a7f9-6dbe4934fa2d" (UID: "89fcc075-6724-447c-a7f9-6dbe4934fa2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.694885 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89fcc075-6724-447c-a7f9-6dbe4934fa2d" (UID: "89fcc075-6724-447c-a7f9-6dbe4934fa2d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.708488 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.708828 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.708838 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzj4r\" (UniqueName: \"kubernetes.io/projected/89fcc075-6724-447c-a7f9-6dbe4934fa2d-kube-api-access-wzj4r\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.708848 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.720447 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89fcc075-6724-447c-a7f9-6dbe4934fa2d" (UID: "89fcc075-6724-447c-a7f9-6dbe4934fa2d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.744447 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s6fvd"] Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.810717 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89fcc075-6724-447c-a7f9-6dbe4934fa2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.821461 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.911417 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb\") pod \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.911488 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc\") pod \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.911509 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb\") pod \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.911543 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config\") pod 
\"fb423745-95c8-483c-bfe4-bc7e57ccbe02\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.911707 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrdn\" (UniqueName: \"kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn\") pod \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\" (UID: \"fb423745-95c8-483c-bfe4-bc7e57ccbe02\") " Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.922470 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn" (OuterVolumeSpecName: "kube-api-access-nkrdn") pod "fb423745-95c8-483c-bfe4-bc7e57ccbe02" (UID: "fb423745-95c8-483c-bfe4-bc7e57ccbe02"). InnerVolumeSpecName "kube-api-access-nkrdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.940136 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb423745-95c8-483c-bfe4-bc7e57ccbe02" (UID: "fb423745-95c8-483c-bfe4-bc7e57ccbe02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.945216 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb423745-95c8-483c-bfe4-bc7e57ccbe02" (UID: "fb423745-95c8-483c-bfe4-bc7e57ccbe02"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.947469 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config" (OuterVolumeSpecName: "config") pod "fb423745-95c8-483c-bfe4-bc7e57ccbe02" (UID: "fb423745-95c8-483c-bfe4-bc7e57ccbe02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:36 crc kubenswrapper[4827]: I0126 09:24:36.982038 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb423745-95c8-483c-bfe4-bc7e57ccbe02" (UID: "fb423745-95c8-483c-bfe4-bc7e57ccbe02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.013716 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.013746 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.013774 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.013783 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrdn\" (UniqueName: \"kubernetes.io/projected/fb423745-95c8-483c-bfe4-bc7e57ccbe02-kube-api-access-nkrdn\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:37 crc 
kubenswrapper[4827]: I0126 09:24:37.013845 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb423745-95c8-483c-bfe4-bc7e57ccbe02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.341474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" event={"ID":"fb423745-95c8-483c-bfe4-bc7e57ccbe02","Type":"ContainerDied","Data":"086e46811819893f698109723197c72ca02685789caf6d8c3f70f60d5929e1b3"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.342447 4827 scope.go:117] "RemoveContainer" containerID="004b53732e8f25f570fcbd7a894e9da2da9b125a45679428638bf2667965a8e6" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.342631 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-5fwd2" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.366263 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" event={"ID":"89fcc075-6724-447c-a7f9-6dbe4934fa2d","Type":"ContainerDied","Data":"1eaecd90dfe131b457ae69be71bece516c0271d7c0de1c089972aeb2bbd9a2d0"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.374048 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-t6d84" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.399920 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6fvd" event={"ID":"6ed11b79-49ca-4b9a-9ebc-413bb8032271","Type":"ContainerStarted","Data":"e95a08f1d4fb80122d7e98df48f83c24bbecf99f60391c6906ebdd65709c431e"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.406947 4827 generic.go:334] "Generic (PLEG): container finished" podID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerID="cd23abdd3cbcecdae4dde34202c563de68b4a0543fc63e54161d6aad1ba72071" exitCode=0 Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.407008 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" event={"ID":"5b6b8b42-302b-4d1f-af82-973aeed6e0a9","Type":"ContainerDied","Data":"cd23abdd3cbcecdae4dde34202c563de68b4a0543fc63e54161d6aad1ba72071"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.407038 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" event={"ID":"5b6b8b42-302b-4d1f-af82-973aeed6e0a9","Type":"ContainerStarted","Data":"dfd9b1238497611f9d43eee622faa26f5e6852c6f7f85407ab004b5c86061806"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.420608 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lwd9n" event={"ID":"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36","Type":"ContainerStarted","Data":"3eb24f441e9e26bbe97d88532431b6a99b3480c4b1fc4513a1ed44d71a474dab"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.430387 4827 scope.go:117] "RemoveContainer" containerID="9e1c1de1278791358ea0644b48968ba3a4816752d580fc63b9559c2224b978bb" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.458085 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.462674 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-db-sync-tjnkp" event={"ID":"cb9f773f-e44d-4773-824f-dde5313c3c26","Type":"ContainerStarted","Data":"a6cd996dc778f0218de0a8800e717d3d095a508c2c51b6d3031f8dd5c2deeab5"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.462718 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tjnkp" event={"ID":"cb9f773f-e44d-4773-824f-dde5313c3c26","Type":"ContainerStarted","Data":"b522ad6658a35c7b79b1b65c94ac7e8885bf2d139a9bfa3f97bb391592ad6394"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.483116 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bpxp5" event={"ID":"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66","Type":"ContainerStarted","Data":"a184984f770bfcfabacec2064d36c1eadd208a5e680c838210c5d16021c972d7"} Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.724843 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tjnkp" podStartSLOduration=3.724824281 podStartE2EDuration="3.724824281s" podCreationTimestamp="2026-01-26 09:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:37.522387024 +0000 UTC m=+1106.171058843" watchObservedRunningTime="2026-01-26 09:24:37.724824281 +0000 UTC m=+1106.373496100" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.762748 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.801872 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-5fwd2"] Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.844169 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:37 crc kubenswrapper[4827]: E0126 09:24:37.846438 4827 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb423745_95c8_483c_bfe4_bc7e57ccbe02.slice/crio-086e46811819893f698109723197c72ca02685789caf6d8c3f70f60d5929e1b3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89fcc075_6724_447c_a7f9_6dbe4934fa2d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89fcc075_6724_447c_a7f9_6dbe4934fa2d.slice/crio-1eaecd90dfe131b457ae69be71bece516c0271d7c0de1c089972aeb2bbd9a2d0\": RecentStats: unable to find data in memory cache]" Jan 26 09:24:37 crc kubenswrapper[4827]: I0126 09:24:37.850102 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-t6d84"] Jan 26 09:24:38 crc kubenswrapper[4827]: I0126 09:24:38.502065 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" event={"ID":"5b6b8b42-302b-4d1f-af82-973aeed6e0a9","Type":"ContainerStarted","Data":"cfa2c3c108b503ab81b2fe989e0f61de3e3dd0094852d8d0f452a8426eceb16c"} Jan 26 09:24:38 crc kubenswrapper[4827]: I0126 09:24:38.503169 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:38 crc kubenswrapper[4827]: I0126 09:24:38.531920 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" podStartSLOduration=3.531900459 podStartE2EDuration="3.531900459s" podCreationTimestamp="2026-01-26 09:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:24:38.530236025 +0000 UTC m=+1107.178907844" watchObservedRunningTime="2026-01-26 09:24:38.531900459 +0000 UTC m=+1107.180572288" Jan 26 09:24:39 crc kubenswrapper[4827]: I0126 09:24:39.720431 4827 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" path="/var/lib/kubelet/pods/89fcc075-6724-447c-a7f9-6dbe4934fa2d/volumes" Jan 26 09:24:39 crc kubenswrapper[4827]: I0126 09:24:39.721026 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb423745-95c8-483c-bfe4-bc7e57ccbe02" path="/var/lib/kubelet/pods/fb423745-95c8-483c-bfe4-bc7e57ccbe02/volumes" Jan 26 09:24:41 crc kubenswrapper[4827]: I0126 09:24:41.548459 4827 generic.go:334] "Generic (PLEG): container finished" podID="94433d65-9e54-4812-ac43-ee89c91d3506" containerID="31bdf708d5b0eacca7094ada130a76850591ac86386f7e16fd5a6d231f186e89" exitCode=0 Jan 26 09:24:41 crc kubenswrapper[4827]: I0126 09:24:41.548522 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk4l5" event={"ID":"94433d65-9e54-4812-ac43-ee89c91d3506","Type":"ContainerDied","Data":"31bdf708d5b0eacca7094ada130a76850591ac86386f7e16fd5a6d231f186e89"} Jan 26 09:24:42 crc kubenswrapper[4827]: I0126 09:24:42.269093 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:24:42 crc kubenswrapper[4827]: I0126 09:24:42.269162 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:24:45 crc kubenswrapper[4827]: I0126 09:24:45.712592 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" Jan 26 09:24:45 crc kubenswrapper[4827]: I0126 09:24:45.803666 4827 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"] Jan 26 09:24:45 crc kubenswrapper[4827]: I0126 09:24:45.803888 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns" containerID="cri-o://a94be0fa5f974286a7c44481a06d9a2e3ea587f1e40c5605d4f4341db4aea0a1" gracePeriod=10 Jan 26 09:24:46 crc kubenswrapper[4827]: I0126 09:24:46.601591 4827 generic.go:334] "Generic (PLEG): container finished" podID="5f9cc942-f402-4e73-b974-c61b05650876" containerID="a94be0fa5f974286a7c44481a06d9a2e3ea587f1e40c5605d4f4341db4aea0a1" exitCode=0 Jan 26 09:24:46 crc kubenswrapper[4827]: I0126 09:24:46.601953 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" event={"ID":"5f9cc942-f402-4e73-b974-c61b05650876","Type":"ContainerDied","Data":"a94be0fa5f974286a7c44481a06d9a2e3ea587f1e40c5605d4f4341db4aea0a1"} Jan 26 09:24:47 crc kubenswrapper[4827]: I0126 09:24:47.509824 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 26 09:24:52 crc kubenswrapper[4827]: I0126 09:24:52.509364 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 26 09:24:57 crc kubenswrapper[4827]: I0126 09:24:57.510001 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection 
refused" Jan 26 09:24:57 crc kubenswrapper[4827]: I0126 09:24:57.510577 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" Jan 26 09:24:59 crc kubenswrapper[4827]: E0126 09:24:59.702271 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 26 09:24:59 crc kubenswrapper[4827]: E0126 09:24:59.702915 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/c
onfig_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gj7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-bpxp5_openstack(8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:24:59 crc kubenswrapper[4827]: E0126 09:24:59.704316 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-bpxp5" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.706511 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.719507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk4l5" event={"ID":"94433d65-9e54-4812-ac43-ee89c91d3506","Type":"ContainerDied","Data":"d22eaef24f4344b355f249d279f50b965cd6e52c22dff6d7aa6c7e2a767db402"} Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.719552 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22eaef24f4344b355f249d279f50b965cd6e52c22dff6d7aa6c7e2a767db402" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856067 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856205 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856228 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856255 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kj4k\" (UniqueName: \"kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: 
\"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856337 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.856416 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle\") pod \"94433d65-9e54-4812-ac43-ee89c91d3506\" (UID: \"94433d65-9e54-4812-ac43-ee89c91d3506\") " Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.902856 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k" (OuterVolumeSpecName: "kube-api-access-8kj4k") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). InnerVolumeSpecName "kube-api-access-8kj4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.908954 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.913589 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts" (OuterVolumeSpecName: "scripts") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.951812 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.952854 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data" (OuterVolumeSpecName: "config-data") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.966793 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.966831 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.966843 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kj4k\" (UniqueName: \"kubernetes.io/projected/94433d65-9e54-4812-ac43-ee89c91d3506-kube-api-access-8kj4k\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.966857 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 
09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.966869 4827 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:59 crc kubenswrapper[4827]: I0126 09:24:59.993821 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94433d65-9e54-4812-ac43-ee89c91d3506" (UID: "94433d65-9e54-4812-ac43-ee89c91d3506"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.068594 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94433d65-9e54-4812-ac43-ee89c91d3506-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.721993 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jk4l5" Jan 26 09:25:00 crc kubenswrapper[4827]: E0126 09:25:00.731072 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-bpxp5" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.801910 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jk4l5"] Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.819151 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jk4l5"] Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.901944 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hm5j4"] Jan 26 09:25:00 crc kubenswrapper[4827]: E0126 09:25:00.902370 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94433d65-9e54-4812-ac43-ee89c91d3506" containerName="keystone-bootstrap" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.902394 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="94433d65-9e54-4812-ac43-ee89c91d3506" containerName="keystone-bootstrap" Jan 26 09:25:00 crc kubenswrapper[4827]: E0126 09:25:00.902416 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" containerName="init" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.902424 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" containerName="init" Jan 26 09:25:00 crc kubenswrapper[4827]: E0126 09:25:00.902434 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb423745-95c8-483c-bfe4-bc7e57ccbe02" containerName="init" Jan 26 09:25:00 crc 
kubenswrapper[4827]: I0126 09:25:00.902441 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb423745-95c8-483c-bfe4-bc7e57ccbe02" containerName="init" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.902631 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="94433d65-9e54-4812-ac43-ee89c91d3506" containerName="keystone-bootstrap" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.902662 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fcc075-6724-447c-a7f9-6dbe4934fa2d" containerName="init" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.902677 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb423745-95c8-483c-bfe4-bc7e57ccbe02" containerName="init" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.903716 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hm5j4" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.911929 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.912022 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.911942 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.912285 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.912479 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cg6fq" Jan 26 09:25:00 crc kubenswrapper[4827]: I0126 09:25:00.920924 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hm5j4"] Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.085595 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.085805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.085853 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6wf\" (UniqueName: \"kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.085884 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.085975 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.086041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.188223 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.192703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.192735 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.192796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.193019 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.193079 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6wf\" (UniqueName: \"kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.201092 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.201803 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.209690 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.212016 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.220566 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.226084 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6wf\" (UniqueName: \"kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf\") pod \"keystone-bootstrap-hm5j4\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.229157 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hm5j4"
Jan 26 09:25:01 crc kubenswrapper[4827]: I0126 09:25:01.714026 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94433d65-9e54-4812-ac43-ee89c91d3506" path="/var/lib/kubelet/pods/94433d65-9e54-4812-ac43-ee89c91d3506/volumes"
Jan 26 09:25:02 crc kubenswrapper[4827]: I0126 09:25:02.745199 4827 generic.go:334] "Generic (PLEG): container finished" podID="cb9f773f-e44d-4773-824f-dde5313c3c26" containerID="a6cd996dc778f0218de0a8800e717d3d095a508c2c51b6d3031f8dd5c2deeab5" exitCode=0
Jan 26 09:25:02 crc kubenswrapper[4827]: I0126 09:25:02.745244 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tjnkp" event={"ID":"cb9f773f-e44d-4773-824f-dde5313c3c26","Type":"ContainerDied","Data":"a6cd996dc778f0218de0a8800e717d3d095a508c2c51b6d3031f8dd5c2deeab5"}
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.323521 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b"
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.324279 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grbkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-s6fvd_openstack(6ed11b79-49ca-4b9a-9ebc-413bb8032271): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.325714 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-s6fvd" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.403537 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.416975 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tjnkp"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547532 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6vbq\" (UniqueName: \"kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq\") pod \"5f9cc942-f402-4e73-b974-c61b05650876\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547622 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpzps\" (UniqueName: \"kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps\") pod \"cb9f773f-e44d-4773-824f-dde5313c3c26\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547665 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb\") pod \"5f9cc942-f402-4e73-b974-c61b05650876\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547695 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc\") pod \"5f9cc942-f402-4e73-b974-c61b05650876\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547739 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle\") pod \"cb9f773f-e44d-4773-824f-dde5313c3c26\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547776 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config\") pod \"cb9f773f-e44d-4773-824f-dde5313c3c26\" (UID: \"cb9f773f-e44d-4773-824f-dde5313c3c26\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547821 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb\") pod \"5f9cc942-f402-4e73-b974-c61b05650876\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.547941 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config\") pod \"5f9cc942-f402-4e73-b974-c61b05650876\" (UID: \"5f9cc942-f402-4e73-b974-c61b05650876\") "
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.556929 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq" (OuterVolumeSpecName: "kube-api-access-w6vbq") pod "5f9cc942-f402-4e73-b974-c61b05650876" (UID: "5f9cc942-f402-4e73-b974-c61b05650876"). InnerVolumeSpecName "kube-api-access-w6vbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.557257 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps" (OuterVolumeSpecName: "kube-api-access-hpzps") pod "cb9f773f-e44d-4773-824f-dde5313c3c26" (UID: "cb9f773f-e44d-4773-824f-dde5313c3c26"). InnerVolumeSpecName "kube-api-access-hpzps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.592117 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb9f773f-e44d-4773-824f-dde5313c3c26" (UID: "cb9f773f-e44d-4773-824f-dde5313c3c26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.611755 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config" (OuterVolumeSpecName: "config") pod "cb9f773f-e44d-4773-824f-dde5313c3c26" (UID: "cb9f773f-e44d-4773-824f-dde5313c3c26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.619813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f9cc942-f402-4e73-b974-c61b05650876" (UID: "5f9cc942-f402-4e73-b974-c61b05650876"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.620208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f9cc942-f402-4e73-b974-c61b05650876" (UID: "5f9cc942-f402-4e73-b974-c61b05650876"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.641027 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f9cc942-f402-4e73-b974-c61b05650876" (UID: "5f9cc942-f402-4e73-b974-c61b05650876"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649358 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6vbq\" (UniqueName: \"kubernetes.io/projected/5f9cc942-f402-4e73-b974-c61b05650876-kube-api-access-w6vbq\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649389 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpzps\" (UniqueName: \"kubernetes.io/projected/cb9f773f-e44d-4773-824f-dde5313c3c26-kube-api-access-hpzps\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649399 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649409 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649419 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649426 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb9f773f-e44d-4773-824f-dde5313c3c26-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.649436 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.662106 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config" (OuterVolumeSpecName: "config") pod "5f9cc942-f402-4e73-b974-c61b05650876" (UID: "5f9cc942-f402-4e73-b974-c61b05650876"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.751258 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9cc942-f402-4e73-b974-c61b05650876-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.764603 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tjnkp" event={"ID":"cb9f773f-e44d-4773-824f-dde5313c3c26","Type":"ContainerDied","Data":"b522ad6658a35c7b79b1b65c94ac7e8885bf2d139a9bfa3f97bb391592ad6394"}
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.764632 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tjnkp"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.764659 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b522ad6658a35c7b79b1b65c94ac7e8885bf2d139a9bfa3f97bb391592ad6394"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.770279 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.772430 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" event={"ID":"5f9cc942-f402-4e73-b974-c61b05650876","Type":"ContainerDied","Data":"82526a99f51e1520ec28a7847b469f413164dc963faaa66c493d33de1f16f7a3"}
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.772509 4827 scope.go:117] "RemoveContainer" containerID="a94be0fa5f974286a7c44481a06d9a2e3ea587f1e40c5605d4f4341db4aea0a1"
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.784036 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-s6fvd" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.852720 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"]
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.859164 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bp7wz"]
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.960230 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"]
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.960739 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9f773f-e44d-4773-824f-dde5313c3c26" containerName="neutron-db-sync"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.960758 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9f773f-e44d-4773-824f-dde5313c3c26" containerName="neutron-db-sync"
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.960777 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="init"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.960785 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="init"
Jan 26 09:25:04 crc kubenswrapper[4827]: E0126 09:25:04.960798 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.960806 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.961005 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9f773f-e44d-4773-824f-dde5313c3c26" containerName="neutron-db-sync"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.961019 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.962041 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:04 crc kubenswrapper[4827]: I0126 09:25:04.986791 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"]
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.059784 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk4x\" (UniqueName: \"kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.059899 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.059986 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.060060 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.060087 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.152660 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-669c664556-xn8st"]
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.158137 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.162826 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fxhkx"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163065 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163230 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163292 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk4x\" (UniqueName: \"kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163416 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163469 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.163617 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.164414 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.164887 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.165360 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.165558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.170269 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.176548 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-669c664556-xn8st"]
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.198406 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk4x\" (UniqueName: \"kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x\") pod \"dnsmasq-dns-548894858c-6mmz8\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.264895 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.265205 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.265243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.265267 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.265287 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fnwm\" (UniqueName: \"kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.303814 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.366832 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.366901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.366943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.366978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.366995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fnwm\" (UniqueName: \"kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.375563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.382439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.403558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.405042 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.415498 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fnwm\" (UniqueName: \"kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm\") pod \"neutron-669c664556-xn8st\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.487716 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669c664556-xn8st"
Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.711699 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9cc942-f402-4e73-b974-c61b05650876" path="/var/lib/kubelet/pods/5f9cc942-f402-4e73-b974-c61b05650876/volumes"
Jan 26 09:25:05 crc kubenswrapper[4827]: E0126 09:25:05.934534 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16"
Jan 26 09:25:05 crc kubenswrapper[4827]: E0126 09:25:05.934708 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tq4vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabiliti
es:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lwd9n_openstack(a0129b71-c166-4c4d-b8e9-c7f1f1acdd36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 09:25:05 crc kubenswrapper[4827]: E0126 09:25:05.936016 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lwd9n" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" Jan 26 09:25:05 crc kubenswrapper[4827]: I0126 09:25:05.967987 4827 scope.go:117] "RemoveContainer" containerID="60cdb241ec594632ef112239e486eb652c0ad1f9bb7ba6b168fb2c96357c3cd3" Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.524889 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hm5j4"] Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.631007 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-669c664556-xn8st"] Jan 26 09:25:06 crc kubenswrapper[4827]: W0126 09:25:06.651274 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff2c416_a1be_4c3b_a73e_8c779a9cfb73.slice/crio-8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475 WatchSource:0}: Error finding container 8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475: Status 404 returned error can't find the container with id 
8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475 Jan 26 09:25:06 crc kubenswrapper[4827]: W0126 09:25:06.693142 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcea11c1d_cffb_4de8_ada4_c74439fa04c5.slice/crio-86e70988960f5d7af89b51e2b7882398769312e0052ec1fdc3691b320c1eda3f WatchSource:0}: Error finding container 86e70988960f5d7af89b51e2b7882398769312e0052ec1fdc3691b320c1eda3f: Status 404 returned error can't find the container with id 86e70988960f5d7af89b51e2b7882398769312e0052ec1fdc3691b320c1eda3f Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.705052 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"] Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.813962 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerStarted","Data":"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147"} Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.816612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerStarted","Data":"8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475"} Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.817711 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-6mmz8" event={"ID":"cea11c1d-cffb-4de8-ada4-c74439fa04c5","Type":"ContainerStarted","Data":"86e70988960f5d7af89b51e2b7882398769312e0052ec1fdc3691b320c1eda3f"} Jan 26 09:25:06 crc kubenswrapper[4827]: I0126 09:25:06.819701 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hm5j4" 
event={"ID":"411be221-7c35-404e-9f79-7d5498fec92c","Type":"ContainerStarted","Data":"c4cdd0d54a67f9c84fa371772c8a2ce420d255706057092ac3ffaae7bc434838"} Jan 26 09:25:06 crc kubenswrapper[4827]: E0126 09:25:06.822865 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-lwd9n" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.510350 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-bp7wz" podUID="5f9cc942-f402-4e73-b974-c61b05650876" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.673117 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.674754 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.677488 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.677743 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.699254 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.828515 4827 generic.go:334] "Generic (PLEG): container finished" podID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerID="2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f" exitCode=0 Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.828584 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-6mmz8" event={"ID":"cea11c1d-cffb-4de8-ada4-c74439fa04c5","Type":"ContainerDied","Data":"2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f"} Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.835114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hm5j4" event={"ID":"411be221-7c35-404e-9f79-7d5498fec92c","Type":"ContainerStarted","Data":"f2ee19780d1808d08dd895b212b88d071a4852853161b46b8639a718cd1346ac"} Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836280 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836396 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836422 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836453 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836473 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jhf\" (UniqueName: \"kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.836512 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.839551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerStarted","Data":"56cab7b8704b3e651f9d40f0e0b77cbf71a4a53420baec52c47578fcafd29b84"} Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.839589 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerStarted","Data":"f673a28c07db6e9b4fffa5321a6f782d479a60c08355ccffbe919ebd9373a64a"} Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.840342 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-669c664556-xn8st" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.884869 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hm5j4" podStartSLOduration=7.88484575 podStartE2EDuration="7.88484575s" podCreationTimestamp="2026-01-26 09:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:07.871734955 +0000 UTC m=+1136.520406774" watchObservedRunningTime="2026-01-26 09:25:07.88484575 +0000 UTC m=+1136.533517589" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.897257 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-669c664556-xn8st" podStartSLOduration=2.897240696 podStartE2EDuration="2.897240696s" podCreationTimestamp="2026-01-26 09:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:07.896162266 +0000 UTC m=+1136.544834095" watchObservedRunningTime="2026-01-26 09:25:07.897240696 +0000 UTC m=+1136.545912505" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.937437 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.937525 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.938220 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.938298 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.938334 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle\") pod \"neutron-67459d4777-j9nd4\" (UID: 
\"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.938367 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jhf\" (UniqueName: \"kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.938401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.944432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.944958 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.948596 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc 
kubenswrapper[4827]: I0126 09:25:07.948687 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.956338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.957049 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.961096 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jhf\" (UniqueName: \"kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf\") pod \"neutron-67459d4777-j9nd4\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:07 crc kubenswrapper[4827]: I0126 09:25:07.998386 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:08 crc kubenswrapper[4827]: I0126 09:25:08.603530 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:08 crc kubenswrapper[4827]: I0126 09:25:08.847854 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerStarted","Data":"68aeca83f757f3d5fc90d5aead781e9d51ab328f4b277b41b0bf4f56334ef1f8"} Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.855914 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerStarted","Data":"433ab8f3e1f6c1f20eeefadd26526c3c5d5afd5fd6b59db9adea299ab847a45c"} Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.856262 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerStarted","Data":"6d12d642ac5ab0d430051ee70a591a7a7426e99a24231cad89c6e7d47149f57b"} Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.856334 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.858632 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-6mmz8" event={"ID":"cea11c1d-cffb-4de8-ada4-c74439fa04c5","Type":"ContainerStarted","Data":"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e"} Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.859874 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-548894858c-6mmz8" Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.860365 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerStarted","Data":"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7"} Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.923707 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67459d4777-j9nd4" podStartSLOduration=2.923686197 podStartE2EDuration="2.923686197s" podCreationTimestamp="2026-01-26 09:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:09.893945502 +0000 UTC m=+1138.542617321" watchObservedRunningTime="2026-01-26 09:25:09.923686197 +0000 UTC m=+1138.572358016" Jan 26 09:25:09 crc kubenswrapper[4827]: I0126 09:25:09.939739 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-548894858c-6mmz8" podStartSLOduration=5.939720751 podStartE2EDuration="5.939720751s" podCreationTimestamp="2026-01-26 09:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:09.938122397 +0000 UTC m=+1138.586794216" watchObservedRunningTime="2026-01-26 09:25:09.939720751 +0000 UTC m=+1138.588392570" Jan 26 09:25:12 crc kubenswrapper[4827]: I0126 09:25:12.268395 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:25:12 crc kubenswrapper[4827]: I0126 09:25:12.268976 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 09:25:12 crc kubenswrapper[4827]: I0126 09:25:12.891284 4827 generic.go:334] "Generic (PLEG): container finished" podID="411be221-7c35-404e-9f79-7d5498fec92c" containerID="f2ee19780d1808d08dd895b212b88d071a4852853161b46b8639a718cd1346ac" exitCode=0 Jan 26 09:25:12 crc kubenswrapper[4827]: I0126 09:25:12.891321 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hm5j4" event={"ID":"411be221-7c35-404e-9f79-7d5498fec92c","Type":"ContainerDied","Data":"f2ee19780d1808d08dd895b212b88d071a4852853161b46b8639a718cd1346ac"} Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.671696 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hm5j4" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804473 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804546 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804575 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6wf\" (UniqueName: \"kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804673 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804749 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.804787 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys\") pod \"411be221-7c35-404e-9f79-7d5498fec92c\" (UID: \"411be221-7c35-404e-9f79-7d5498fec92c\") " Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.812984 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts" (OuterVolumeSpecName: "scripts") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.814018 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf" (OuterVolumeSpecName: "kube-api-access-5j6wf") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "kube-api-access-5j6wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.814788 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.817425 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.834311 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.840360 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data" (OuterVolumeSpecName: "config-data") pod "411be221-7c35-404e-9f79-7d5498fec92c" (UID: "411be221-7c35-404e-9f79-7d5498fec92c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.906975 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.907011 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.907023 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.907053 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.907067 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6wf\" (UniqueName: \"kubernetes.io/projected/411be221-7c35-404e-9f79-7d5498fec92c-kube-api-access-5j6wf\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.907079 4827 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/411be221-7c35-404e-9f79-7d5498fec92c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.914001 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hm5j4" event={"ID":"411be221-7c35-404e-9f79-7d5498fec92c","Type":"ContainerDied","Data":"c4cdd0d54a67f9c84fa371772c8a2ce420d255706057092ac3ffaae7bc434838"} Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 
09:25:14.914038 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4cdd0d54a67f9c84fa371772c8a2ce420d255706057092ac3ffaae7bc434838" Jan 26 09:25:14 crc kubenswrapper[4827]: I0126 09:25:14.914095 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hm5j4" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.018574 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb549d74c-6hlgt"] Jan 26 09:25:15 crc kubenswrapper[4827]: E0126 09:25:15.019089 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411be221-7c35-404e-9f79-7d5498fec92c" containerName="keystone-bootstrap" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.019114 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="411be221-7c35-404e-9f79-7d5498fec92c" containerName="keystone-bootstrap" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.019309 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="411be221-7c35-404e-9f79-7d5498fec92c" containerName="keystone-bootstrap" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.020199 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb549d74c-6hlgt" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.023729 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.025758 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cg6fq" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.025808 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.026115 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.026116 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.027795 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.033732 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb549d74c-6hlgt"] Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.110958 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-scripts\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt" Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111026 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-credential-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " 
pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111107 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-internal-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111153 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-fernet-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111181 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-public-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111204 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-combined-ca-bundle\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111244 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-config-data\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.111311 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5f45\" (UniqueName: \"kubernetes.io/projected/053973de-195d-44a8-ba9f-d665b8a53c87-kube-api-access-x5f45\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212311 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5f45\" (UniqueName: \"kubernetes.io/projected/053973de-195d-44a8-ba9f-d665b8a53c87-kube-api-access-x5f45\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212381 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-scripts\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-credential-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212494 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-internal-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc
kubenswrapper[4827]: I0126 09:25:15.212525 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-fernet-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212556 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-public-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212586 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-combined-ca-bundle\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.212611 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-config-data\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.217029 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-credential-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.218490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-fernet-keys\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.218870 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-scripts\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.218534 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-config-data\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.219300 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-public-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.219711 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-internal-tls-certs\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.230817 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5f45\" (UniqueName: \"kubernetes.io/projected/053973de-195d-44a8-ba9f-d665b8a53c87-kube-api-access-x5f45\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.232610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053973de-195d-44a8-ba9f-d665b8a53c87-combined-ca-bundle\") pod \"keystone-bb549d74c-6hlgt\" (UID: \"053973de-195d-44a8-ba9f-d665b8a53c87\") " pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.305772 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-548894858c-6mmz8"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.346033 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.405831 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"]
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.406138 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="dnsmasq-dns" containerID="cri-o://cfa2c3c108b503ab81b2fe989e0f61de3e3dd0094852d8d0f452a8426eceb16c" gracePeriod=10
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.705172 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused"
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.930066 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerStarted","Data":"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57"}
Jan 26 09:25:15 crc
kubenswrapper[4827]: I0126 09:25:15.932909 4827 generic.go:334] "Generic (PLEG): container finished" podID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerID="cfa2c3c108b503ab81b2fe989e0f61de3e3dd0094852d8d0f452a8426eceb16c" exitCode=0
Jan 26 09:25:15 crc kubenswrapper[4827]: I0126 09:25:15.932944 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" event={"ID":"5b6b8b42-302b-4d1f-af82-973aeed6e0a9","Type":"ContainerDied","Data":"cfa2c3c108b503ab81b2fe989e0f61de3e3dd0094852d8d0f452a8426eceb16c"}
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:15.998261 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb549d74c-6hlgt"]
Jan 26 09:25:16 crc kubenswrapper[4827]: W0126 09:25:16.011704 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod053973de_195d_44a8_ba9f_d665b8a53c87.slice/crio-c491f29b520ffd0039417c1f85e61f8251c1b4654acbfa7dda0720de7dbb59cd WatchSource:0}: Error finding container c491f29b520ffd0039417c1f85e61f8251c1b4654acbfa7dda0720de7dbb59cd: Status 404 returned error can't find the container with id c491f29b520ffd0039417c1f85e61f8251c1b4654acbfa7dda0720de7dbb59cd
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.150709 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd"
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.261750 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config\") pod \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") "
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.261885 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc\") pod \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") "
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.261918 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb\") pod \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") "
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.261997 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v79ts\" (UniqueName: \"kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts\") pod \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") "
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.262021 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb\") pod \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\" (UID: \"5b6b8b42-302b-4d1f-af82-973aeed6e0a9\") "
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.273937 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts" (OuterVolumeSpecName: "kube-api-access-v79ts") pod "5b6b8b42-302b-4d1f-af82-973aeed6e0a9" (UID: "5b6b8b42-302b-4d1f-af82-973aeed6e0a9"). InnerVolumeSpecName "kube-api-access-v79ts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.322539 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b6b8b42-302b-4d1f-af82-973aeed6e0a9" (UID: "5b6b8b42-302b-4d1f-af82-973aeed6e0a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.335770 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config" (OuterVolumeSpecName: "config") pod "5b6b8b42-302b-4d1f-af82-973aeed6e0a9" (UID: "5b6b8b42-302b-4d1f-af82-973aeed6e0a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.338854 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b6b8b42-302b-4d1f-af82-973aeed6e0a9" (UID: "5b6b8b42-302b-4d1f-af82-973aeed6e0a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.346819 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b6b8b42-302b-4d1f-af82-973aeed6e0a9" (UID: "5b6b8b42-302b-4d1f-af82-973aeed6e0a9"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.366569 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.366600 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.366613 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v79ts\" (UniqueName: \"kubernetes.io/projected/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-kube-api-access-v79ts\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.366621 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.366631 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b8b42-302b-4d1f-af82-973aeed6e0a9-config\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.941979 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb549d74c-6hlgt" event={"ID":"053973de-195d-44a8-ba9f-d665b8a53c87","Type":"ContainerStarted","Data":"3450e91df8f6a93635fb5b0ddaf052ade3373832a6092e2c4302dd9db5fea25a"}
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.942062 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb549d74c-6hlgt" event={"ID":"053973de-195d-44a8-ba9f-d665b8a53c87","Type":"ContainerStarted","Data":"c491f29b520ffd0039417c1f85e61f8251c1b4654acbfa7dda0720de7dbb59cd"}
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.943263 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-bb549d74c-6hlgt"
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.951035 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd" event={"ID":"5b6b8b42-302b-4d1f-af82-973aeed6e0a9","Type":"ContainerDied","Data":"dfd9b1238497611f9d43eee622faa26f5e6852c6f7f85407ab004b5c86061806"}
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.951105 4827 scope.go:117] "RemoveContainer" containerID="cfa2c3c108b503ab81b2fe989e0f61de3e3dd0094852d8d0f452a8426eceb16c"
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.951239 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-4blrd"
Jan 26 09:25:16 crc kubenswrapper[4827]: I0126 09:25:16.982838 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bb549d74c-6hlgt" podStartSLOduration=2.982792472 podStartE2EDuration="2.982792472s" podCreationTimestamp="2026-01-26 09:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:16.981887697 +0000 UTC m=+1145.630559526" watchObservedRunningTime="2026-01-26 09:25:16.982792472 +0000 UTC m=+1145.631464291"
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.038780 4827 scope.go:117] "RemoveContainer" containerID="cd23abdd3cbcecdae4dde34202c563de68b4a0543fc63e54161d6aad1ba72071"
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.067947 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"]
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.073452 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-4blrd"]
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.712244 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" path="/var/lib/kubelet/pods/5b6b8b42-302b-4d1f-af82-973aeed6e0a9/volumes"
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.960279 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bpxp5" event={"ID":"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66","Type":"ContainerStarted","Data":"4eed3b95fbd98695b5c69b244c5fdc77296c222c3d1fc1cd0ad684fd7e0088d4"}
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.964159 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6fvd" event={"ID":"6ed11b79-49ca-4b9a-9ebc-413bb8032271","Type":"ContainerStarted","Data":"3d3e488a7d5c1fbe9209f942ad2577e1afbf38fe919014c568b91e0677f82a7b"}
Jan 26 09:25:17 crc kubenswrapper[4827]: I0126 09:25:17.980201 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bpxp5" podStartSLOduration=4.45688846 podStartE2EDuration="43.980180269s" podCreationTimestamp="2026-01-26 09:24:34 +0000 UTC" firstStartedPulling="2026-01-26 09:24:36.676555878 +0000 UTC m=+1105.325227697" lastFinishedPulling="2026-01-26 09:25:16.199847687 +0000 UTC m=+1144.848519506" observedRunningTime="2026-01-26 09:25:17.978059332 +0000 UTC m=+1146.626731171" watchObservedRunningTime="2026-01-26 09:25:17.980180269 +0000 UTC m=+1146.628852088"
Jan 26 09:25:18 crc kubenswrapper[4827]: I0126 09:25:18.001434 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s6fvd" podStartSLOduration=2.467414796 podStartE2EDuration="43.001415473s" podCreationTimestamp="2026-01-26 09:24:35 +0000 UTC" firstStartedPulling="2026-01-26 09:24:36.755081942 +0000 UTC m=+1105.403753761" lastFinishedPulling="2026-01-26 09:25:17.289082619 +0000
UTC m=+1145.937754438" observedRunningTime="2026-01-26 09:25:17.996362348 +0000 UTC m=+1146.645034187" watchObservedRunningTime="2026-01-26 09:25:18.001415473 +0000 UTC m=+1146.650087292"
Jan 26 09:25:19 crc kubenswrapper[4827]: I0126 09:25:19.987427 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lwd9n" event={"ID":"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36","Type":"ContainerStarted","Data":"3b11929874ddbef020edc6afdae1bde44c9b79b19b0af4893bcbe4797a5d5f90"}
Jan 26 09:25:20 crc kubenswrapper[4827]: I0126 09:25:20.002374 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lwd9n" podStartSLOduration=1.9328012509999999 podStartE2EDuration="45.002356826s" podCreationTimestamp="2026-01-26 09:24:35 +0000 UTC" firstStartedPulling="2026-01-26 09:24:36.462056853 +0000 UTC m=+1105.110728672" lastFinishedPulling="2026-01-26 09:25:19.531612428 +0000 UTC m=+1148.180284247" observedRunningTime="2026-01-26 09:25:20.00108121 +0000 UTC m=+1148.649753029" watchObservedRunningTime="2026-01-26 09:25:20.002356826 +0000 UTC m=+1148.651028645"
Jan 26 09:25:21 crc kubenswrapper[4827]: I0126 09:25:21.007413 4827 generic.go:334] "Generic (PLEG): container finished" podID="6ed11b79-49ca-4b9a-9ebc-413bb8032271" containerID="3d3e488a7d5c1fbe9209f942ad2577e1afbf38fe919014c568b91e0677f82a7b" exitCode=0
Jan 26 09:25:21 crc kubenswrapper[4827]: I0126 09:25:21.007656 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6fvd" event={"ID":"6ed11b79-49ca-4b9a-9ebc-413bb8032271","Type":"ContainerDied","Data":"3d3e488a7d5c1fbe9209f942ad2577e1afbf38fe919014c568b91e0677f82a7b"}
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.039422 4827 generic.go:334] "Generic (PLEG): container finished" podID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" containerID="4eed3b95fbd98695b5c69b244c5fdc77296c222c3d1fc1cd0ad684fd7e0088d4" exitCode=0
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.039527 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bpxp5" event={"ID":"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66","Type":"ContainerDied","Data":"4eed3b95fbd98695b5c69b244c5fdc77296c222c3d1fc1cd0ad684fd7e0088d4"}
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.043626 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" containerID="3b11929874ddbef020edc6afdae1bde44c9b79b19b0af4893bcbe4797a5d5f90" exitCode=0
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.043699 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lwd9n" event={"ID":"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36","Type":"ContainerDied","Data":"3b11929874ddbef020edc6afdae1bde44c9b79b19b0af4893bcbe4797a5d5f90"}
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.438527 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s6fvd"
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.546274 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts\") pod \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") "
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.546550 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbkk\" (UniqueName: \"kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk\") pod \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") "
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.546609 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data\") pod \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") "
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.546714 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs\") pod \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") "
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.546796 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle\") pod \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\" (UID: \"6ed11b79-49ca-4b9a-9ebc-413bb8032271\") "
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.547166 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs" (OuterVolumeSpecName: "logs") pod "6ed11b79-49ca-4b9a-9ebc-413bb8032271" (UID: "6ed11b79-49ca-4b9a-9ebc-413bb8032271"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.547389 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed11b79-49ca-4b9a-9ebc-413bb8032271-logs\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.550928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk" (OuterVolumeSpecName: "kube-api-access-grbkk") pod "6ed11b79-49ca-4b9a-9ebc-413bb8032271" (UID: "6ed11b79-49ca-4b9a-9ebc-413bb8032271"). InnerVolumeSpecName "kube-api-access-grbkk".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.551274 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts" (OuterVolumeSpecName: "scripts") pod "6ed11b79-49ca-4b9a-9ebc-413bb8032271" (UID: "6ed11b79-49ca-4b9a-9ebc-413bb8032271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.570122 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data" (OuterVolumeSpecName: "config-data") pod "6ed11b79-49ca-4b9a-9ebc-413bb8032271" (UID: "6ed11b79-49ca-4b9a-9ebc-413bb8032271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.573167 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ed11b79-49ca-4b9a-9ebc-413bb8032271" (UID: "6ed11b79-49ca-4b9a-9ebc-413bb8032271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.648520 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbkk\" (UniqueName: \"kubernetes.io/projected/6ed11b79-49ca-4b9a-9ebc-413bb8032271-kube-api-access-grbkk\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.648557 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.648567 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:25 crc kubenswrapper[4827]: I0126 09:25:25.648578 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed11b79-49ca-4b9a-9ebc-413bb8032271-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.055558 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s6fvd"
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.055557 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6fvd" event={"ID":"6ed11b79-49ca-4b9a-9ebc-413bb8032271","Type":"ContainerDied","Data":"e95a08f1d4fb80122d7e98df48f83c24bbecf99f60391c6906ebdd65709c431e"}
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058053 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95a08f1d4fb80122d7e98df48f83c24bbecf99f60391c6906ebdd65709c431e"
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058476 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerStarted","Data":"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743"}
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058752 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-central-agent" containerID="cri-o://b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147" gracePeriod=30
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058854 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="proxy-httpd" containerID="cri-o://dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743" gracePeriod=30
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058904 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="sg-core" containerID="cri-o://4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57" gracePeriod=30
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.058945 4827
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-notification-agent" containerID="cri-o://ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7" gracePeriod=30
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.059051 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 09:25:26 crc kubenswrapper[4827]: I0126 09:25:26.126400 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.910575781 podStartE2EDuration="52.126374859s" podCreationTimestamp="2026-01-26 09:24:34 +0000 UTC" firstStartedPulling="2026-01-26 09:24:36.220365324 +0000 UTC m=+1104.869037143" lastFinishedPulling="2026-01-26 09:25:25.436164402 +0000 UTC m=+1154.084836221" observedRunningTime="2026-01-26 09:25:26.113389247 +0000 UTC m=+1154.762061106" watchObservedRunningTime="2026-01-26 09:25:26.126374859 +0000 UTC m=+1154.775046698"
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.610611 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bpxp5"
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.616028 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lwd9n"
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data\") pod \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665189 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665222 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665247 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle\") pod \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665270 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq4vk\" (UniqueName: \"kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk\") pod \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\" (UID: \"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665313 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665335 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.665374 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gj7t\" (UniqueName: \"kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") "
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.694106 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t" (OuterVolumeSpecName: "kube-api-access-2gj7t") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "kube-api-access-2gj7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.718868 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts" (OuterVolumeSpecName: "scripts") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.719777 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" (UID: "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.719851 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.726530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk" (OuterVolumeSpecName: "kube-api-access-tq4vk") pod "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" (UID: "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36"). InnerVolumeSpecName "kube-api-access-tq4vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.734301 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.754800 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84fd67f47d-vt6sw"] Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:26.755187 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="init" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755199 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="init" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:26.755225 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" containerName="barbican-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755233 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" containerName="barbican-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:26.755249 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" containerName="cinder-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755255 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" containerName="cinder-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:26.755263 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="dnsmasq-dns" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755270 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="dnsmasq-dns" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:26.755283 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271" containerName="placement-db-sync" Jan 26 09:25:27 crc 
kubenswrapper[4827]: I0126 09:25:26.755289 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271" containerName="placement-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755449 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271" containerName="placement-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755463 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" containerName="cinder-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755470 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6b8b42-302b-4d1f-af82-973aeed6e0a9" containerName="dnsmasq-dns" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.755481 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" containerName="barbican-db-sync" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.756361 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.758261 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.760126 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w7cfb" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.760418 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.760473 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.762273 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.765333 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" (UID: "a0129b71-c166-4c4d-b8e9-c7f1f1acdd36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.766790 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id\") pod \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\" (UID: \"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770282 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-internal-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770333 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-combined-ca-bundle\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770371 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-scripts\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770392 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/225ee5ae-fc10-4dd9-af29-0d227dd81802-logs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 
26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770457 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pd9\" (UniqueName: \"kubernetes.io/projected/225ee5ae-fc10-4dd9-af29-0d227dd81802-kube-api-access-92pd9\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770529 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-public-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770566 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84fd67f47d-vt6sw"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.770608 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-config-data\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.767789 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773055 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773083 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773095 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773103 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773113 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq4vk\" (UniqueName: \"kubernetes.io/projected/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36-kube-api-access-tq4vk\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773124 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.773133 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 
09:25:26.773143 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gj7t\" (UniqueName: \"kubernetes.io/projected/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-kube-api-access-2gj7t\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.803857 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data" (OuterVolumeSpecName: "config-data") pod "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" (UID: "8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.874814 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-public-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.874903 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-config-data\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.874986 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-internal-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.875010 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-combined-ca-bundle\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.875042 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-scripts\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.875079 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/225ee5ae-fc10-4dd9-af29-0d227dd81802-logs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.875122 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pd9\" (UniqueName: \"kubernetes.io/projected/225ee5ae-fc10-4dd9-af29-0d227dd81802-kube-api-access-92pd9\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.875176 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.876375 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/225ee5ae-fc10-4dd9-af29-0d227dd81802-logs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: 
I0126 09:25:26.879579 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-internal-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.880209 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-combined-ca-bundle\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.883212 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-public-tls-certs\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.885094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-scripts\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.887288 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/225ee5ae-fc10-4dd9-af29-0d227dd81802-config-data\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:26.892190 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pd9\" 
(UniqueName: \"kubernetes.io/projected/225ee5ae-fc10-4dd9-af29-0d227dd81802-kube-api-access-92pd9\") pod \"placement-84fd67f47d-vt6sw\" (UID: \"225ee5ae-fc10-4dd9-af29-0d227dd81802\") " pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068842 4827 generic.go:334] "Generic (PLEG): container finished" podID="46606188-20d6-4a48-9ff3-26012755c942" containerID="dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743" exitCode=0 Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068866 4827 generic.go:334] "Generic (PLEG): container finished" podID="46606188-20d6-4a48-9ff3-26012755c942" containerID="4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57" exitCode=2 Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068875 4827 generic.go:334] "Generic (PLEG): container finished" podID="46606188-20d6-4a48-9ff3-26012755c942" containerID="b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147" exitCode=0 Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068913 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerDied","Data":"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743"} Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068938 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerDied","Data":"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57"} Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.068949 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerDied","Data":"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147"} Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.070941 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-db-sync-lwd9n" event={"ID":"a0129b71-c166-4c4d-b8e9-c7f1f1acdd36","Type":"ContainerDied","Data":"3eb24f441e9e26bbe97d88532431b6a99b3480c4b1fc4513a1ed44d71a474dab"} Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.070961 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb24f441e9e26bbe97d88532431b6a99b3480c4b1fc4513a1ed44d71a474dab" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.071009 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lwd9n" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.080972 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bpxp5" event={"ID":"8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66","Type":"ContainerDied","Data":"a184984f770bfcfabacec2064d36c1eadd208a5e680c838210c5d16021c972d7"} Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.081009 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a184984f770bfcfabacec2064d36c1eadd208a5e680c838210c5d16021c972d7" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.082868 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bpxp5" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.096628 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.436700 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.438219 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.439405 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7bb7c7c765-wmktj"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.440933 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.458170 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.458385 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.458502 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.458819 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.459354 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c886m" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.459506 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5qpq5" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.459631 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.495327 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bb7c7c765-wmktj"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.529581 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.576869 4827 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7665698578-xljwl"] Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.578495 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590316 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590371 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tbdh\" (UniqueName: \"kubernetes.io/projected/ed6134dd-363a-49bb-99bb-6bac419c845a-kube-api-access-7tbdh\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590396 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590446 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590466 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-combined-ca-bundle\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590480 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6134dd-363a-49bb-99bb-6bac419c845a-logs\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590512 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data-custom\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590543 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " 
pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590557 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qzzl\" (UniqueName: \"kubernetes.io/projected/13b70f90-293b-4b38-be0f-0e5bde0c5e85-kube-api-access-8qzzl\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590589 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-combined-ca-bundle\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590611 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590630 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data-custom\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590673 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.590734 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b70f90-293b-4b38-be0f-0e5bde0c5e85-logs\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.613334 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.643630 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.695028 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.695886 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.696440 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.696547 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.696739 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data-custom\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.696875 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.696947 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697017 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b70f90-293b-4b38-be0f-0e5bde0c5e85-logs\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697125 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697204 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tbdh\" (UniqueName: \"kubernetes.io/projected/ed6134dd-363a-49bb-99bb-6bac419c845a-kube-api-access-7tbdh\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697282 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: 
\"kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697355 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697418 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697479 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-combined-ca-bundle\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6134dd-363a-49bb-99bb-6bac419c845a-logs\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697611 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data-custom\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697727 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697814 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qzzl\" (UniqueName: \"kubernetes.io/projected/13b70f90-293b-4b38-be0f-0e5bde0c5e85-kube-api-access-8qzzl\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697880 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-combined-ca-bundle\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.698569 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.699073 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.752866 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b70f90-293b-4b38-be0f-0e5bde0c5e85-logs\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.755383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6134dd-363a-49bb-99bb-6bac419c845a-logs\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.772573 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.772579 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data-custom\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.773465 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.697131 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.773843 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-config-data-custom\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.775152 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts" (OuterVolumeSpecName: "scripts") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.810579 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.810750 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.810800 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhcwc\" (UniqueName: \"kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc\") pod \"46606188-20d6-4a48-9ff3-26012755c942\" (UID: \"46606188-20d6-4a48-9ff3-26012755c942\") " Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.812044 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.812066 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.812083 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46606188-20d6-4a48-9ff3-26012755c942-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.813069 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-config-data\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.813652 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.832572 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.833318 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.833949 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b70f90-293b-4b38-be0f-0e5bde0c5e85-combined-ca-bundle\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.834501 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed6134dd-363a-49bb-99bb-6bac419c845a-combined-ca-bundle\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.880461 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf\") pod \"cinder-scheduler-0\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.881496 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qzzl\" (UniqueName: \"kubernetes.io/projected/13b70f90-293b-4b38-be0f-0e5bde0c5e85-kube-api-access-8qzzl\") pod \"barbican-keystone-listener-7665698578-xljwl\" (UID: \"13b70f90-293b-4b38-be0f-0e5bde0c5e85\") " pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.934917 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc" (OuterVolumeSpecName: "kube-api-access-nhcwc") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "kube-api-access-nhcwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.958414 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tbdh\" (UniqueName: \"kubernetes.io/projected/ed6134dd-363a-49bb-99bb-6bac419c845a-kube-api-access-7tbdh\") pod \"barbican-worker-7bb7c7c765-wmktj\" (UID: \"ed6134dd-363a-49bb-99bb-6bac419c845a\") " pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.979562 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc7cc4dcf-sxblk"] Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:27.980533 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="proxy-httpd" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.980717 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="proxy-httpd" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:27.980810 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="sg-core" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.980915 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="sg-core" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:27.980974 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-notification-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.981021 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-notification-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: E0126 09:25:27.981205 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46606188-20d6-4a48-9ff3-26012755c942" 
containerName="ceilometer-central-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.981280 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-central-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.982071 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-notification-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.982195 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="sg-core" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.982285 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="proxy-httpd" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.982391 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="46606188-20d6-4a48-9ff3-26012755c942" containerName="ceilometer-central-agent" Jan 26 09:25:27 crc kubenswrapper[4827]: I0126 09:25:27.984134 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.026001 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhcwc\" (UniqueName: \"kubernetes.io/projected/46606188-20d6-4a48-9ff3-26012755c942-kube-api-access-nhcwc\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.035029 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7665698578-xljwl" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.046857 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc7cc4dcf-sxblk"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.056180 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.056472 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.079774 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84fd67f47d-vt6sw"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.086124 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.097724 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7bb7c7c765-wmktj" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127505 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrkt\" (UniqueName: \"kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127553 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127593 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127656 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config\") pod 
\"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127694 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.127705 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.129769 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7665698578-xljwl"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.174846 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc7cc4dcf-sxblk"] Jan 26 09:25:28 crc kubenswrapper[4827]: E0126 09:25:28.175498 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-thrkt ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" podUID="07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.201328 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.202620 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.208522 4827 generic.go:334] "Generic (PLEG): container finished" podID="46606188-20d6-4a48-9ff3-26012755c942" containerID="ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7" exitCode=0 Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.208683 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerDied","Data":"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7"} Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.208710 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46606188-20d6-4a48-9ff3-26012755c942","Type":"ContainerDied","Data":"093721ddf83938312a017f4e883fbbd5bcece1621b76638639f82a684b85d35d"} Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.208820 4827 scope.go:117] "RemoveContainer" containerID="dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.208869 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.241309 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.241364 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.241410 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.241434 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.241487 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrkt\" (UniqueName: \"kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 
crc kubenswrapper[4827]: I0126 09:25:28.242420 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.242937 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.243429 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.244046 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.244099 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84fd67f47d-vt6sw" event={"ID":"225ee5ae-fc10-4dd9-af29-0d227dd81802","Type":"ContainerStarted","Data":"2ff74a4901da6488e270fed42ee99368256972396f8b4b18b42171a37530f385"} Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.271308 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:25:28 crc kubenswrapper[4827]: 
I0126 09:25:28.272441 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrkt\" (UniqueName: \"kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt\") pod \"dnsmasq-dns-5dc7cc4dcf-sxblk\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.283864 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data" (OuterVolumeSpecName: "config-data") pod "46606188-20d6-4a48-9ff3-26012755c942" (UID: "46606188-20d6-4a48-9ff3-26012755c942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.318704 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.320412 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.337969 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343576 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343631 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntg72\" (UniqueName: \"kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343763 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343811 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.343891 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46606188-20d6-4a48-9ff3-26012755c942-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.376784 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.418801 4827 scope.go:117] "RemoveContainer" containerID="4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.430884 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.432186 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.436054 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.443630 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448157 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448308 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448397 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448492 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc 
kubenswrapper[4827]: I0126 09:25:28.448574 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448686 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448749 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448810 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntg72\" (UniqueName: \"kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.448914 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dbg\" (UniqueName: \"kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc 
kubenswrapper[4827]: I0126 09:25:28.448979 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.449871 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.449890 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.450380 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.450731 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.523775 4827 scope.go:117] "RemoveContainer" 
containerID="ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.542788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntg72\" (UniqueName: \"kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72\") pod \"dnsmasq-dns-7474d577dc-4fhpr\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566300 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566346 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566385 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dbg\" (UniqueName: \"kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566405 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " 
pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566460 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566487 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566506 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566521 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 
09:25:28.566536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566559 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q866\" (UniqueName: \"kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.566598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.567543 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.587380 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.592392 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.596111 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.600896 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.614415 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dbg\" (UniqueName: \"kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg\") pod \"barbican-api-5d6d9c5fbd-b4nwv\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670779 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670797 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670811 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670836 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q866\" (UniqueName: \"kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670884 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.670926 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.671875 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " 
pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.672151 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.677539 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.678233 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.696401 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.697403 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.711089 4827 scope.go:117] "RemoveContainer" containerID="b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.719584 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q866\" (UniqueName: \"kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866\") pod \"cinder-api-0\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.748016 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.774134 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.774737 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.808394 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.852063 4827 scope.go:117] "RemoveContainer" containerID="dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.852165 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.853985 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: E0126 09:25:28.860789 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743\": container with ID starting with dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743 not found: ID does not exist" containerID="dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.861014 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.861028 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743"} err="failed to get container status \"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743\": rpc error: code = NotFound desc = could not find container \"dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743\": container with ID starting with dcfbea5f9b28bbc9e6d265d152f1474327df650760222abcd3989444d03a5743 not found: ID does not exist" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.861054 4827 scope.go:117] "RemoveContainer" containerID="4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.861288 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 09:25:28 crc kubenswrapper[4827]: E0126 09:25:28.864222 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57\": container with ID starting with 4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57 not found: ID does not 
exist" containerID="4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.864270 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57"} err="failed to get container status \"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57\": rpc error: code = NotFound desc = could not find container \"4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57\": container with ID starting with 4f6b31070d61105fc693b4aa2f9da526d49d1a7df486ae978961591b11b00c57 not found: ID does not exist" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.864303 4827 scope.go:117] "RemoveContainer" containerID="ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7" Jan 26 09:25:28 crc kubenswrapper[4827]: E0126 09:25:28.865529 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7\": container with ID starting with ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7 not found: ID does not exist" containerID="ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.865567 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7"} err="failed to get container status \"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7\": rpc error: code = NotFound desc = could not find container \"ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7\": container with ID starting with ae3c66834b31fbce01c60539face8e0b1653d09e7ba3ab3c18b9a4c382e2abd7 not found: ID does not exist" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.865588 4827 scope.go:117] 
"RemoveContainer" containerID="b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147" Jan 26 09:25:28 crc kubenswrapper[4827]: E0126 09:25:28.885301 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147\": container with ID starting with b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147 not found: ID does not exist" containerID="b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.885337 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147"} err="failed to get container status \"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147\": rpc error: code = NotFound desc = could not find container \"b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147\": container with ID starting with b427a21a2620136021d04a54fbdb59be1b7679cf7d3d5ffb8b67e44a5f861147 not found: ID does not exist" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.888317 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979157 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979405 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data\") pod \"ceilometer-0\" (UID: 
\"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979434 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979494 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979546 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:28 crc kubenswrapper[4827]: I0126 09:25:28.979613 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdc4\" (UniqueName: \"kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 
09:25:29.081603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.081845 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.081929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.082000 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.082062 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.082134 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd\") pod \"ceilometer-0\" (UID: 
\"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.082204 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdc4\" (UniqueName: \"kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.082922 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.083194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.086755 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.090924 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.095104 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.098682 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.103400 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdc4\" (UniqueName: \"kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4\") pod \"ceilometer-0\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.191607 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.201783 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.277724 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84fd67f47d-vt6sw" event={"ID":"225ee5ae-fc10-4dd9-af29-0d227dd81802","Type":"ContainerStarted","Data":"1c82c3ab582d136a0480d1bd11b8c06912b80e21cd0d92c18e95fe7b218066e6"} Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.280496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerStarted","Data":"b8dcf27f017f1beb1f7d3b1dfa6e15eb204eca521f1bec96e5e3403666674902"} Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.287343 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.291952 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7665698578-xljwl"] Jan 26 09:25:29 crc kubenswrapper[4827]: W0126 09:25:29.302391 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13b70f90_293b_4b38_be0f_0e5bde0c5e85.slice/crio-feec64a021cdc6796f2a0663ec70bba68938a2d91c140f9fb80da114896b50ac WatchSource:0}: Error finding container feec64a021cdc6796f2a0663ec70bba68938a2d91c140f9fb80da114896b50ac: Status 404 returned error can't find the container with id feec64a021cdc6796f2a0663ec70bba68938a2d91c140f9fb80da114896b50ac Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.311110 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.386327 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrkt\" (UniqueName: \"kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt\") pod \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.386424 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb\") pod \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.386504 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config\") pod \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\" (UID: 
\"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.386529 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb\") pod \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.386548 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc\") pod \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\" (UID: \"07dc334e-3105-4f2a-a1ec-6cf80ba4fc63\") " Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.389052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" (UID: "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.389610 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" (UID: "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.390134 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config" (OuterVolumeSpecName: "config") pod "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" (UID: "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.390545 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" (UID: "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.400950 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt" (OuterVolumeSpecName: "kube-api-access-thrkt") pod "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" (UID: "07dc334e-3105-4f2a-a1ec-6cf80ba4fc63"). InnerVolumeSpecName "kube-api-access-thrkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.437294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bb7c7c765-wmktj"] Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.490050 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.490092 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.490106 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.490118 4827 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-thrkt\" (UniqueName: \"kubernetes.io/projected/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-kube-api-access-thrkt\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.490129 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.537565 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.725182 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46606188-20d6-4a48-9ff3-26012755c942" path="/var/lib/kubelet/pods/46606188-20d6-4a48-9ff3-26012755c942/volumes" Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.733791 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"] Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.760901 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:25:29 crc kubenswrapper[4827]: W0126 09:25:29.780785 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d94e18_5c09_4459_b330_861d46795409.slice/crio-f0b955b5aac44ac7073fd80a8dbf7294f1e98fe22c9f300f880e0220ea2a6342 WatchSource:0}: Error finding container f0b955b5aac44ac7073fd80a8dbf7294f1e98fe22c9f300f880e0220ea2a6342: Status 404 returned error can't find the container with id f0b955b5aac44ac7073fd80a8dbf7294f1e98fe22c9f300f880e0220ea2a6342 Jan 26 09:25:29 crc kubenswrapper[4827]: I0126 09:25:29.926554 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:25:30 crc kubenswrapper[4827]: W0126 09:25:30.176451 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd79e837_4610_45ba_b2b9_ee7f3e8d52eb.slice/crio-cd910e2200b4663058be37a9e9d559970805597863ceee3e09ad9ac0ce42c088 WatchSource:0}: Error finding container cd910e2200b4663058be37a9e9d559970805597863ceee3e09ad9ac0ce42c088: Status 404 returned error can't find the container with id cd910e2200b4663058be37a9e9d559970805597863ceee3e09ad9ac0ce42c088 Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.320788 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerStarted","Data":"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.321512 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerStarted","Data":"1d43777d0ca931b90b47dbe0bdc5d39cf1c486b38272b51f516ebfaa865121fb"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.339092 4827 generic.go:334] "Generic (PLEG): container finished" podID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerID="d4ebfbbbeab0d3fb68564e3184559fe52d26888027c7ba90986c3238ed2f287b" exitCode=0 Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.339148 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" event={"ID":"562ed53d-de3c-4e5b-9385-05d2564d587a","Type":"ContainerDied","Data":"d4ebfbbbeab0d3fb68564e3184559fe52d26888027c7ba90986c3238ed2f287b"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.339174 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" event={"ID":"562ed53d-de3c-4e5b-9385-05d2564d587a","Type":"ContainerStarted","Data":"cca2c58227e2c8872e1bca83239769d6fe4636e0af6867083d7382a12d98e8fc"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.343014 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7665698578-xljwl" event={"ID":"13b70f90-293b-4b38-be0f-0e5bde0c5e85","Type":"ContainerStarted","Data":"feec64a021cdc6796f2a0663ec70bba68938a2d91c140f9fb80da114896b50ac"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.344787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bb7c7c765-wmktj" event={"ID":"ed6134dd-363a-49bb-99bb-6bac419c845a","Type":"ContainerStarted","Data":"52f3056bbff94756f21e7fb4b690b75fb6b744347c92eb38b12059225b88c807"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.346375 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84fd67f47d-vt6sw" event={"ID":"225ee5ae-fc10-4dd9-af29-0d227dd81802","Type":"ContainerStarted","Data":"b5f18f12fa8605bf6e207f08abf79c43cba663c8bec2533d9a0fad21c9eb9775"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.347222 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.347247 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.348187 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerStarted","Data":"cd910e2200b4663058be37a9e9d559970805597863ceee3e09ad9ac0ce42c088"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.349346 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc7cc4dcf-sxblk" Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.349787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerStarted","Data":"f0b955b5aac44ac7073fd80a8dbf7294f1e98fe22c9f300f880e0220ea2a6342"} Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.413070 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84fd67f47d-vt6sw" podStartSLOduration=4.413052847 podStartE2EDuration="4.413052847s" podCreationTimestamp="2026-01-26 09:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:30.390459346 +0000 UTC m=+1159.039131165" watchObservedRunningTime="2026-01-26 09:25:30.413052847 +0000 UTC m=+1159.061724666" Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.632451 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc7cc4dcf-sxblk"] Jan 26 09:25:30 crc kubenswrapper[4827]: I0126 09:25:30.645996 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc7cc4dcf-sxblk"] Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.147493 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.385773 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerStarted","Data":"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637"} Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.386844 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.386878 4827 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.414075 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" event={"ID":"562ed53d-de3c-4e5b-9385-05d2564d587a","Type":"ContainerStarted","Data":"a8b763d25114e9ecce43025a46304953346ea5f95ce4ab3b5465fd8260fa1f7c"} Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.414296 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.419521 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" podStartSLOduration=3.419498989 podStartE2EDuration="3.419498989s" podCreationTimestamp="2026-01-26 09:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:31.415477701 +0000 UTC m=+1160.064149520" watchObservedRunningTime="2026-01-26 09:25:31.419498989 +0000 UTC m=+1160.068170808" Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.427670 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerStarted","Data":"a48c616a853fbeb159e59853e8e14787aa07745dc93535de57429b57065b92fd"} Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.456292 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" podStartSLOduration=3.456268224 podStartE2EDuration="3.456268224s" podCreationTimestamp="2026-01-26 09:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:31.449266185 +0000 UTC m=+1160.097938014" watchObservedRunningTime="2026-01-26 09:25:31.456268224 
+0000 UTC m=+1160.104940043" Jan 26 09:25:31 crc kubenswrapper[4827]: I0126 09:25:31.728131 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07dc334e-3105-4f2a-a1ec-6cf80ba4fc63" path="/var/lib/kubelet/pods/07dc334e-3105-4f2a-a1ec-6cf80ba4fc63/volumes" Jan 26 09:25:32 crc kubenswrapper[4827]: I0126 09:25:32.437913 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerStarted","Data":"84393af4b2b89406d263dc26282f739c08d24b8897bc46c1d1ea0959e25043fe"} Jan 26 09:25:32 crc kubenswrapper[4827]: I0126 09:25:32.440255 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerStarted","Data":"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.449429 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerStarted","Data":"0f3193006acfb5e82192e2d07b62233c666cf4c4dd739f05a239d217382e26bb"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.451562 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bb7c7c765-wmktj" event={"ID":"ed6134dd-363a-49bb-99bb-6bac419c845a","Type":"ContainerStarted","Data":"c6b4192916b5f14870e4d38bee635ae7b56b606dcebd667e9547ba0d81183b65"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.451611 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bb7c7c765-wmktj" event={"ID":"ed6134dd-363a-49bb-99bb-6bac419c845a","Type":"ContainerStarted","Data":"52c98ccbf33412093fb7ea7ee4a8aeb262b772e7a37cbe6912aa53fb31b9074c"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.453493 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerStarted","Data":"9bbee3a7a8f3442a237cbd2448763f1d734a2969275dc04a69572f721c8062c0"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.455463 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerStarted","Data":"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.455584 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api-log" containerID="cri-o://b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757" gracePeriod=30 Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.455801 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.455833 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" containerID="cri-o://546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417" gracePeriod=30 Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.458510 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7665698578-xljwl" event={"ID":"13b70f90-293b-4b38-be0f-0e5bde0c5e85","Type":"ContainerStarted","Data":"b4f1444a58ea45a7ab80af75bc471bb80073535db5d15a648951587fd510a0cb"} Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.458542 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7665698578-xljwl" event={"ID":"13b70f90-293b-4b38-be0f-0e5bde0c5e85","Type":"ContainerStarted","Data":"5acf3e575fd7011017ed140f0b49dc717b9a26b19cd7ff79a60f53000f94894a"} Jan 26 09:25:33 crc 
kubenswrapper[4827]: I0126 09:25:33.476225 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.504844107 podStartE2EDuration="6.47620928s" podCreationTimestamp="2026-01-26 09:25:27 +0000 UTC" firstStartedPulling="2026-01-26 09:25:29.236937784 +0000 UTC m=+1157.885609593" lastFinishedPulling="2026-01-26 09:25:30.208302947 +0000 UTC m=+1158.856974766" observedRunningTime="2026-01-26 09:25:33.468584364 +0000 UTC m=+1162.117256183" watchObservedRunningTime="2026-01-26 09:25:33.47620928 +0000 UTC m=+1162.124881089" Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.517897 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7bb7c7c765-wmktj" podStartSLOduration=3.379607391 podStartE2EDuration="6.517878017s" podCreationTimestamp="2026-01-26 09:25:27 +0000 UTC" firstStartedPulling="2026-01-26 09:25:29.47296765 +0000 UTC m=+1158.121639469" lastFinishedPulling="2026-01-26 09:25:32.611238276 +0000 UTC m=+1161.259910095" observedRunningTime="2026-01-26 09:25:33.490295411 +0000 UTC m=+1162.138967230" watchObservedRunningTime="2026-01-26 09:25:33.517878017 +0000 UTC m=+1162.166549836" Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.519829 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.51981814 podStartE2EDuration="5.51981814s" podCreationTimestamp="2026-01-26 09:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:33.51313795 +0000 UTC m=+1162.161809769" watchObservedRunningTime="2026-01-26 09:25:33.51981814 +0000 UTC m=+1162.168489959" Jan 26 09:25:33 crc kubenswrapper[4827]: I0126 09:25:33.546242 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7665698578-xljwl" 
podStartSLOduration=3.247842757 podStartE2EDuration="6.546224364s" podCreationTimestamp="2026-01-26 09:25:27 +0000 UTC" firstStartedPulling="2026-01-26 09:25:29.311543333 +0000 UTC m=+1157.960215152" lastFinishedPulling="2026-01-26 09:25:32.60992494 +0000 UTC m=+1161.258596759" observedRunningTime="2026-01-26 09:25:33.543452399 +0000 UTC m=+1162.192124218" watchObservedRunningTime="2026-01-26 09:25:33.546224364 +0000 UTC m=+1162.194896183" Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.467917 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerStarted","Data":"01966f16350da6d781059ed993a6700820f908199480f42fce076dcc9e3befc8"} Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.469713 4827 generic.go:334] "Generic (PLEG): container finished" podID="25d94e18-5c09-4459-b330-861d46795409" containerID="b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757" exitCode=143 Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.470519 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerDied","Data":"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757"} Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.954877 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-549f46df88-ldq7r"] Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.956353 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.962679 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.969875 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 09:25:34 crc kubenswrapper[4827]: I0126 09:25:34.976879 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-549f46df88-ldq7r"] Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-public-tls-certs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019842 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-combined-ca-bundle\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019879 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mc7t\" (UniqueName: \"kubernetes.io/projected/149ae16b-d620-417f-a9df-0ff3864c7d08-kube-api-access-7mc7t\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019908 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-internal-tls-certs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019925 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019960 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149ae16b-d620-417f-a9df-0ff3864c7d08-logs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.019982 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data-custom\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.125902 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-public-tls-certs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.125963 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-combined-ca-bundle\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.126003 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mc7t\" (UniqueName: \"kubernetes.io/projected/149ae16b-d620-417f-a9df-0ff3864c7d08-kube-api-access-7mc7t\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.126037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-internal-tls-certs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.126059 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.126104 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149ae16b-d620-417f-a9df-0ff3864c7d08-logs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.126153 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data-custom\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.157491 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data-custom\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.159471 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149ae16b-d620-417f-a9df-0ff3864c7d08-logs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.163691 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-combined-ca-bundle\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.164415 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-public-tls-certs\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.173163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-internal-tls-certs\") pod 
\"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.199146 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149ae16b-d620-417f-a9df-0ff3864c7d08-config-data\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.200569 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mc7t\" (UniqueName: \"kubernetes.io/projected/149ae16b-d620-417f-a9df-0ff3864c7d08-kube-api-access-7mc7t\") pod \"barbican-api-549f46df88-ldq7r\" (UID: \"149ae16b-d620-417f-a9df-0ff3864c7d08\") " pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.271475 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.514218 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-669c664556-xn8st" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.803651 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-549f46df88-ldq7r"] Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.824828 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.825132 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67459d4777-j9nd4" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-api" containerID="cri-o://6d12d642ac5ab0d430051ee70a591a7a7426e99a24231cad89c6e7d47149f57b" gracePeriod=30 Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.825680 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67459d4777-j9nd4" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd" containerID="cri-o://433ab8f3e1f6c1f20eeefadd26526c3c5d5afd5fd6b59db9adea299ab847a45c" gracePeriod=30 Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.887835 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bdc4699d9-tnd4c"] Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.889483 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.911032 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bdc4699d9-tnd4c"] Jan 26 09:25:35 crc kubenswrapper[4827]: I0126 09:25:35.940173 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-67459d4777-j9nd4" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": read tcp 10.217.0.2:58594->10.217.0.141:9696: read: connection reset by peer" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052196 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkw4v\" (UniqueName: \"kubernetes.io/projected/09595eb4-a10d-44f0-9aee-927389e0accb-kube-api-access-zkw4v\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052307 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-combined-ca-bundle\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052328 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052399 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-internal-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052503 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-public-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052561 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-ovndb-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.052584 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-httpd-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154305 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-public-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154376 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-ovndb-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154392 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-httpd-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154444 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkw4v\" (UniqueName: \"kubernetes.io/projected/09595eb4-a10d-44f0-9aee-927389e0accb-kube-api-access-zkw4v\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-combined-ca-bundle\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154747 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.154767 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-internal-tls-certs\") pod 
\"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.158924 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-httpd-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.163432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-combined-ca-bundle\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.163902 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-internal-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.166472 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-ovndb-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.167179 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-public-tls-certs\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 
09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.177764 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/09595eb4-a10d-44f0-9aee-927389e0accb-config\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.212309 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkw4v\" (UniqueName: \"kubernetes.io/projected/09595eb4-a10d-44f0-9aee-927389e0accb-kube-api-access-zkw4v\") pod \"neutron-7bdc4699d9-tnd4c\" (UID: \"09595eb4-a10d-44f0-9aee-927389e0accb\") " pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.224215 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.529913 4827 generic.go:334] "Generic (PLEG): container finished" podID="73209569-8a53-46a7-a420-4864c674bc82" containerID="433ab8f3e1f6c1f20eeefadd26526c3c5d5afd5fd6b59db9adea299ab847a45c" exitCode=0 Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.530194 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerDied","Data":"433ab8f3e1f6c1f20eeefadd26526c3c5d5afd5fd6b59db9adea299ab847a45c"} Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.539354 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-549f46df88-ldq7r" event={"ID":"149ae16b-d620-417f-a9df-0ff3864c7d08","Type":"ContainerStarted","Data":"56d2e3d6d6b8cff29556d66f1aa45e840cb1feb09bd9f86bbd8f3c606b4cbf5c"} Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.539411 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-549f46df88-ldq7r" 
event={"ID":"149ae16b-d620-417f-a9df-0ff3864c7d08","Type":"ContainerStarted","Data":"03c1a775fd1ddc58a13df053487360bdcf5182161a214c774f574f30f33396b7"} Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.539432 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.539445 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.539455 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-549f46df88-ldq7r" event={"ID":"149ae16b-d620-417f-a9df-0ff3864c7d08","Type":"ContainerStarted","Data":"d2fa4b1f2dbed28a0bc86e54400d173fc9a3e0419b41e4b33da29c57c6f4357c"} Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.545252 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerStarted","Data":"2f667ca041264aeef9e087fa2afc8764eee1b165548eccd01d70764c550b2019"} Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.546380 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.565929 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-549f46df88-ldq7r" podStartSLOduration=2.565913921 podStartE2EDuration="2.565913921s" podCreationTimestamp="2026-01-26 09:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:36.563741793 +0000 UTC m=+1165.212413612" watchObservedRunningTime="2026-01-26 09:25:36.565913921 +0000 UTC m=+1165.214585730" Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.595688 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.853193526 podStartE2EDuration="8.595665726s" podCreationTimestamp="2026-01-26 09:25:28 +0000 UTC" firstStartedPulling="2026-01-26 09:25:30.200005453 +0000 UTC m=+1158.848677272" lastFinishedPulling="2026-01-26 09:25:35.942477653 +0000 UTC m=+1164.591149472" observedRunningTime="2026-01-26 09:25:36.588217544 +0000 UTC m=+1165.236889383" watchObservedRunningTime="2026-01-26 09:25:36.595665726 +0000 UTC m=+1165.244337545" Jan 26 09:25:36 crc kubenswrapper[4827]: W0126 09:25:36.957228 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09595eb4_a10d_44f0_9aee_927389e0accb.slice/crio-e6ed407edb71ce920f3935ece1cf19cf034132f90d00f33483894e6f72648fa7 WatchSource:0}: Error finding container e6ed407edb71ce920f3935ece1cf19cf034132f90d00f33483894e6f72648fa7: Status 404 returned error can't find the container with id e6ed407edb71ce920f3935ece1cf19cf034132f90d00f33483894e6f72648fa7 Jan 26 09:25:36 crc kubenswrapper[4827]: I0126 09:25:36.984458 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bdc4699d9-tnd4c"] Jan 26 09:25:37 crc kubenswrapper[4827]: I0126 09:25:37.595847 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bdc4699d9-tnd4c" event={"ID":"09595eb4-a10d-44f0-9aee-927389e0accb","Type":"ContainerStarted","Data":"bc5db55c8cef1be161c94b73388b843731dd17fd01d412cdb8bcb9aa2646c836"} Jan 26 09:25:37 crc kubenswrapper[4827]: I0126 09:25:37.596123 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bdc4699d9-tnd4c" event={"ID":"09595eb4-a10d-44f0-9aee-927389e0accb","Type":"ContainerStarted","Data":"e6ed407edb71ce920f3935ece1cf19cf034132f90d00f33483894e6f72648fa7"} Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.000236 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-67459d4777-j9nd4" 
podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused" Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.087245 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.537125 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.605035 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.621200 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bdc4699d9-tnd4c" event={"ID":"09595eb4-a10d-44f0-9aee-927389e0accb","Type":"ContainerStarted","Data":"a3a3e1c74c7c06aa5de2e75d34e03f8a357b524c9da671e15763680a8c662966"} Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.668094 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bdc4699d9-tnd4c" podStartSLOduration=3.668079242 podStartE2EDuration="3.668079242s" podCreationTimestamp="2026-01-26 09:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:38.660696872 +0000 UTC m=+1167.309368691" watchObservedRunningTime="2026-01-26 09:25:38.668079242 +0000 UTC m=+1167.316751051" Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.691151 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"] Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.691398 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-548894858c-6mmz8" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" 
containerName="dnsmasq-dns" containerID="cri-o://e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e" gracePeriod=10 Jan 26 09:25:38 crc kubenswrapper[4827]: I0126 09:25:38.766646 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.226491 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-6mmz8" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.331578 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb\") pod \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.331660 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb\") pod \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.331765 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prk4x\" (UniqueName: \"kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x\") pod \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.331828 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc\") pod \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.331871 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config\") pod \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\" (UID: \"cea11c1d-cffb-4de8-ada4-c74439fa04c5\") " Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.342046 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x" (OuterVolumeSpecName: "kube-api-access-prk4x") pod "cea11c1d-cffb-4de8-ada4-c74439fa04c5" (UID: "cea11c1d-cffb-4de8-ada4-c74439fa04c5"). InnerVolumeSpecName "kube-api-access-prk4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.428536 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cea11c1d-cffb-4de8-ada4-c74439fa04c5" (UID: "cea11c1d-cffb-4de8-ada4-c74439fa04c5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.433779 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prk4x\" (UniqueName: \"kubernetes.io/projected/cea11c1d-cffb-4de8-ada4-c74439fa04c5-kube-api-access-prk4x\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.433811 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.477103 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cea11c1d-cffb-4de8-ada4-c74439fa04c5" (UID: "cea11c1d-cffb-4de8-ada4-c74439fa04c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.477438 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cea11c1d-cffb-4de8-ada4-c74439fa04c5" (UID: "cea11c1d-cffb-4de8-ada4-c74439fa04c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.484288 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config" (OuterVolumeSpecName: "config") pod "cea11c1d-cffb-4de8-ada4-c74439fa04c5" (UID: "cea11c1d-cffb-4de8-ada4-c74439fa04c5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.535692 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.535735 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.535748 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea11c1d-cffb-4de8-ada4-c74439fa04c5-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.674596 4827 generic.go:334] "Generic (PLEG): container finished" podID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerID="e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e" exitCode=0 Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.675529 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-6mmz8" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.689603 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-6mmz8" event={"ID":"cea11c1d-cffb-4de8-ada4-c74439fa04c5","Type":"ContainerDied","Data":"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e"} Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.691418 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="cinder-scheduler" containerID="cri-o://84393af4b2b89406d263dc26282f739c08d24b8897bc46c1d1ea0959e25043fe" gracePeriod=30 Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.691667 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="probe" containerID="cri-o://0f3193006acfb5e82192e2d07b62233c666cf4c4dd739f05a239d217382e26bb" gracePeriod=30 Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.692275 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.692318 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-6mmz8" event={"ID":"cea11c1d-cffb-4de8-ada4-c74439fa04c5","Type":"ContainerDied","Data":"86e70988960f5d7af89b51e2b7882398769312e0052ec1fdc3691b320c1eda3f"} Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.692351 4827 scope.go:117] "RemoveContainer" containerID="e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.753999 4827 scope.go:117] "RemoveContainer" containerID="2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.761019 4827 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"] Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.782982 4827 scope.go:117] "RemoveContainer" containerID="e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.783686 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548894858c-6mmz8"] Jan 26 09:25:39 crc kubenswrapper[4827]: E0126 09:25:39.783942 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e\": container with ID starting with e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e not found: ID does not exist" containerID="e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.783990 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e"} err="failed to get container status \"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e\": rpc error: code = NotFound desc = could not find container \"e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e\": container with ID starting with e4eebf5eafcbc94ff2bf12100cc1638496d141e5e2647466fe1c491a1f31af7e not found: ID does not exist" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.784021 4827 scope.go:117] "RemoveContainer" containerID="2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f" Jan 26 09:25:39 crc kubenswrapper[4827]: E0126 09:25:39.784998 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f\": container with ID starting with 
2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f not found: ID does not exist" containerID="2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f" Jan 26 09:25:39 crc kubenswrapper[4827]: I0126 09:25:39.785026 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f"} err="failed to get container status \"2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f\": rpc error: code = NotFound desc = could not find container \"2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f\": container with ID starting with 2176e76a5e5cdf2b02faa8d2bc6417f93584bd7fd37ad9487dc83be6058c488f not found: ID does not exist" Jan 26 09:25:41 crc kubenswrapper[4827]: I0126 09:25:41.728984 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" path="/var/lib/kubelet/pods/cea11c1d-cffb-4de8-ada4-c74439fa04c5/volumes" Jan 26 09:25:41 crc kubenswrapper[4827]: I0126 09:25:41.730138 4827 generic.go:334] "Generic (PLEG): container finished" podID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerID="0f3193006acfb5e82192e2d07b62233c666cf4c4dd739f05a239d217382e26bb" exitCode=0 Jan 26 09:25:41 crc kubenswrapper[4827]: I0126 09:25:41.730151 4827 generic.go:334] "Generic (PLEG): container finished" podID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerID="84393af4b2b89406d263dc26282f739c08d24b8897bc46c1d1ea0959e25043fe" exitCode=0 Jan 26 09:25:41 crc kubenswrapper[4827]: I0126 09:25:41.730177 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerDied","Data":"0f3193006acfb5e82192e2d07b62233c666cf4c4dd739f05a239d217382e26bb"} Jan 26 09:25:41 crc kubenswrapper[4827]: I0126 09:25:41.730196 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerDied","Data":"84393af4b2b89406d263dc26282f739c08d24b8897bc46c1d1ea0959e25043fe"} Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.453361 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.453410 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.453450 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.454460 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.454525 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d" gracePeriod=600 Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.622119 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746083 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746418 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746540 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746584 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746658 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.746703 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle\") pod \"c549b7ce-615d-467b-8e6f-4387a0d49e28\" (UID: \"c549b7ce-615d-467b-8e6f-4387a0d49e28\") " Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.748730 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.762568 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.763921 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf" (OuterVolumeSpecName: "kube-api-access-8bqbf") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "kube-api-access-8bqbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.770924 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts" (OuterVolumeSpecName: "scripts") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.809190 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d" exitCode=0 Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.809417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d"} Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.809527 4827 scope.go:117] "RemoveContainer" containerID="bba95a5a0a0bb732dcf7490782c5031e5ab6ba85fa5414f4a4c7981058105c9e" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.842994 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.843286 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c549b7ce-615d-467b-8e6f-4387a0d49e28","Type":"ContainerDied","Data":"b8dcf27f017f1beb1f7d3b1dfa6e15eb204eca521f1bec96e5e3403666674902"} Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.843126 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.848777 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c549b7ce-615d-467b-8e6f-4387a0d49e28-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.848818 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.848830 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/c549b7ce-615d-467b-8e6f-4387a0d49e28-kube-api-access-8bqbf\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.848838 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.901754 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.925228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data" (OuterVolumeSpecName: "config-data") pod "c549b7ce-615d-467b-8e6f-4387a0d49e28" (UID: "c549b7ce-615d-467b-8e6f-4387a0d49e28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.926076 4827 scope.go:117] "RemoveContainer" containerID="0f3193006acfb5e82192e2d07b62233c666cf4c4dd739f05a239d217382e26bb" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.951502 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:42 crc kubenswrapper[4827]: I0126 09:25:42.951547 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c549b7ce-615d-467b-8e6f-4387a0d49e28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.005591 4827 scope.go:117] "RemoveContainer" containerID="84393af4b2b89406d263dc26282f739c08d24b8897bc46c1d1ea0959e25043fe" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.175902 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.188077 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217089 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:43 crc kubenswrapper[4827]: E0126 09:25:43.217442 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerName="init" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217462 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerName="init" Jan 26 09:25:43 crc kubenswrapper[4827]: E0126 09:25:43.217482 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="probe" Jan 26 09:25:43 crc 
kubenswrapper[4827]: I0126 09:25:43.217488 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="probe" Jan 26 09:25:43 crc kubenswrapper[4827]: E0126 09:25:43.217505 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerName="dnsmasq-dns" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217512 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerName="dnsmasq-dns" Jan 26 09:25:43 crc kubenswrapper[4827]: E0126 09:25:43.217528 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="cinder-scheduler" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217535 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="cinder-scheduler" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217745 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea11c1d-cffb-4de8-ada4-c74439fa04c5" containerName="dnsmasq-dns" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217760 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="probe" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.217772 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" containerName="cinder-scheduler" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.218657 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.220676 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.233218 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356241 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356294 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356383 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356410 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356461 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.356481 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjz9\" (UniqueName: \"kubernetes.io/projected/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-kube-api-access-4cjz9\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.376113 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.457936 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458373 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458395 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjz9\" (UniqueName: \"kubernetes.io/projected/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-kube-api-access-4cjz9\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") 
" pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458471 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458523 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.458002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.466506 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.467551 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.468578 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.472074 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.476230 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjz9\" (UniqueName: \"kubernetes.io/projected/aabf5c90-5a50-4950-a417-ddf73a2fe2ce-kube-api-access-4cjz9\") pod \"cinder-scheduler-0\" (UID: \"aabf5c90-5a50-4950-a417-ddf73a2fe2ce\") " pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.723391 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.751331 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c549b7ce-615d-467b-8e6f-4387a0d49e28" path="/var/lib/kubelet/pods/c549b7ce-615d-467b-8e6f-4387a0d49e28/volumes" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.829116 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.150:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.898495 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1"} Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.924064 4827 generic.go:334] "Generic (PLEG): container finished" podID="73209569-8a53-46a7-a420-4864c674bc82" containerID="6d12d642ac5ab0d430051ee70a591a7a7426e99a24231cad89c6e7d47149f57b" exitCode=0 Jan 26 09:25:43 crc kubenswrapper[4827]: I0126 09:25:43.924914 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerDied","Data":"6d12d642ac5ab0d430051ee70a591a7a7426e99a24231cad89c6e7d47149f57b"} Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.314242 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.389647 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.479416 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.479978 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.480121 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.480211 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.480297 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jhf\" (UniqueName: \"kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.480408 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.480533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config\") pod \"73209569-8a53-46a7-a420-4864c674bc82\" (UID: \"73209569-8a53-46a7-a420-4864c674bc82\") " Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.521487 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf" (OuterVolumeSpecName: "kube-api-access-h4jhf") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "kube-api-access-h4jhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.521923 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.587604 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.587644 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jhf\" (UniqueName: \"kubernetes.io/projected/73209569-8a53-46a7-a420-4864c674bc82-kube-api-access-h4jhf\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.605931 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.623134 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config" (OuterVolumeSpecName: "config") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.634782 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.663872 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.673967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73209569-8a53-46a7-a420-4864c674bc82" (UID: "73209569-8a53-46a7-a420-4864c674bc82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.692888 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.692929 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.692942 4827 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.692955 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-public-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.692968 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/73209569-8a53-46a7-a420-4864c674bc82-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.940394 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67459d4777-j9nd4" event={"ID":"73209569-8a53-46a7-a420-4864c674bc82","Type":"ContainerDied","Data":"68aeca83f757f3d5fc90d5aead781e9d51ab328f4b277b41b0bf4f56334ef1f8"} Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.940440 4827 scope.go:117] "RemoveContainer" containerID="433ab8f3e1f6c1f20eeefadd26526c3c5d5afd5fd6b59db9adea299ab847a45c" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.940536 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67459d4777-j9nd4" Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.960279 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aabf5c90-5a50-4950-a417-ddf73a2fe2ce","Type":"ContainerStarted","Data":"4dc3eba9d544028c3ec7ddc6558f9e85b5fada6992ce816f01311f86d62387a5"} Jan 26 09:25:44 crc kubenswrapper[4827]: I0126 09:25:44.985765 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:45 crc kubenswrapper[4827]: I0126 09:25:45.005279 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67459d4777-j9nd4"] Jan 26 09:25:45 crc kubenswrapper[4827]: I0126 09:25:45.012036 4827 scope.go:117] "RemoveContainer" containerID="6d12d642ac5ab0d430051ee70a591a7a7426e99a24231cad89c6e7d47149f57b" Jan 26 09:25:45 crc kubenswrapper[4827]: I0126 09:25:45.715423 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73209569-8a53-46a7-a420-4864c674bc82" 
path="/var/lib/kubelet/pods/73209569-8a53-46a7-a420-4864c674bc82/volumes" Jan 26 09:25:45 crc kubenswrapper[4827]: I0126 09:25:45.970204 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aabf5c90-5a50-4950-a417-ddf73a2fe2ce","Type":"ContainerStarted","Data":"14c97a36291f279cadb461d1369cc35cdf2064e907b5d1dd01975ddb4d8a70f9"} Jan 26 09:25:46 crc kubenswrapper[4827]: I0126 09:25:46.979334 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aabf5c90-5a50-4950-a417-ddf73a2fe2ce","Type":"ContainerStarted","Data":"4c2cdd3230bc0aac0f6ddb95ae4d6556027a12b1bd13f57cc046db8ba38fbb55"} Jan 26 09:25:47 crc kubenswrapper[4827]: I0126 09:25:47.731729 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:47 crc kubenswrapper[4827]: I0126 09:25:47.770447 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.770429212 podStartE2EDuration="4.770429212s" podCreationTimestamp="2026-01-26 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:25:46.997216668 +0000 UTC m=+1175.645888477" watchObservedRunningTime="2026-01-26 09:25:47.770429212 +0000 UTC m=+1176.419101021" Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.298972 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.494262 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-bb549d74c-6hlgt" Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.702448 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-549f46df88-ldq7r" Jan 26 09:25:48 crc kubenswrapper[4827]: 
I0126 09:25:48.725240 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.776824 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"]
Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.777758 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api-log" containerID="cri-o://27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b" gracePeriod=30
Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.777938 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api" containerID="cri-o://1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637" gracePeriod=30
Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.812311 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": EOF"
Jan 26 09:25:48 crc kubenswrapper[4827]: I0126 09:25:48.812819 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": EOF"
Jan 26 09:25:50 crc kubenswrapper[4827]: I0126 09:25:50.002408 4827 generic.go:334] "Generic (PLEG): container finished" podID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerID="27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b" exitCode=143
Jan 26 09:25:50 crc kubenswrapper[4827]: I0126 09:25:50.002446 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerDied","Data":"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b"}
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.459927 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 26 09:25:51 crc kubenswrapper[4827]: E0126 09:25:51.460594 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.460609 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd"
Jan 26 09:25:51 crc kubenswrapper[4827]: E0126 09:25:51.460654 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-api"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.460660 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-api"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.460824 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-httpd"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.460836 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="73209569-8a53-46a7-a420-4864c674bc82" containerName="neutron-api"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.461404 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.463502 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.463502 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.470438 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vrvrj" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.495596 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.539833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config-secret\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.540174 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.540196 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bgm7\" (UniqueName: \"kubernetes.io/projected/a74b1cb5-e36a-49d0-b075-f3f269487645-kube-api-access-5bgm7\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.540306 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.642258 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.642321 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bgm7\" (UniqueName: \"kubernetes.io/projected/a74b1cb5-e36a-49d0-b075-f3f269487645-kube-api-access-5bgm7\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.642446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.642508 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config-secret\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient"
Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.643185 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.647612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.647802 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a74b1cb5-e36a-49d0-b075-f3f269487645-openstack-config-secret\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.670075 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bgm7\" (UniqueName: \"kubernetes.io/projected/a74b1cb5-e36a-49d0-b075-f3f269487645-kube-api-access-5bgm7\") pod \"openstackclient\" (UID: \"a74b1cb5-e36a-49d0-b075-f3f269487645\") " pod="openstack/openstackclient" Jan 26 09:25:51 crc kubenswrapper[4827]: I0126 09:25:51.778747 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.265291 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.395914 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.457909 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7dbg\" (UniqueName: \"kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg\") pod \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.458050 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs\") pod \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.458092 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle\") pod \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.458124 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data\") pod \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.458153 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom\") pod \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\" (UID: \"f6020e09-dbe0-4b59-9b60-895590ba8d0e\") " Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.459320 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs" (OuterVolumeSpecName: "logs") pod "f6020e09-dbe0-4b59-9b60-895590ba8d0e" (UID: "f6020e09-dbe0-4b59-9b60-895590ba8d0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.463525 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg" (OuterVolumeSpecName: "kube-api-access-v7dbg") pod "f6020e09-dbe0-4b59-9b60-895590ba8d0e" (UID: "f6020e09-dbe0-4b59-9b60-895590ba8d0e"). InnerVolumeSpecName "kube-api-access-v7dbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.464889 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6020e09-dbe0-4b59-9b60-895590ba8d0e" (UID: "f6020e09-dbe0-4b59-9b60-895590ba8d0e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.486990 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6020e09-dbe0-4b59-9b60-895590ba8d0e" (UID: "f6020e09-dbe0-4b59-9b60-895590ba8d0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.509834 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data" (OuterVolumeSpecName: "config-data") pod "f6020e09-dbe0-4b59-9b60-895590ba8d0e" (UID: "f6020e09-dbe0-4b59-9b60-895590ba8d0e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.561417 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6020e09-dbe0-4b59-9b60-895590ba8d0e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.561455 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.561468 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.561479 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6020e09-dbe0-4b59-9b60-895590ba8d0e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:52 crc kubenswrapper[4827]: I0126 09:25:52.561490 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7dbg\" (UniqueName: \"kubernetes.io/projected/f6020e09-dbe0-4b59-9b60-895590ba8d0e-kube-api-access-v7dbg\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.046017 4827 generic.go:334] "Generic (PLEG): container finished" podID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerID="1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637" exitCode=0 Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.046064 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.046087 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerDied","Data":"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637"} Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.046429 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d6d9c5fbd-b4nwv" event={"ID":"f6020e09-dbe0-4b59-9b60-895590ba8d0e","Type":"ContainerDied","Data":"1d43777d0ca931b90b47dbe0bdc5d39cf1c486b38272b51f516ebfaa865121fb"} Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.046457 4827 scope.go:117] "RemoveContainer" containerID="1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.047715 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a74b1cb5-e36a-49d0-b075-f3f269487645","Type":"ContainerStarted","Data":"1914b23f78ff95a18a97ee89569f3c3ee25cb41cbc25598adeafb5fb210703b5"} Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.070595 4827 scope.go:117] "RemoveContainer" containerID="27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.090782 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"] Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.098761 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d6d9c5fbd-b4nwv"] Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.102409 4827 scope.go:117] "RemoveContainer" containerID="1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637" Jan 26 09:25:53 crc kubenswrapper[4827]: E0126 09:25:53.102955 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637\": container with ID starting with 1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637 not found: ID does not exist" containerID="1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637"
Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.102993 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637"} err="failed to get container status \"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637\": rpc error: code = NotFound desc = could not find container \"1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637\": container with ID starting with 1f551d43c940a1eb612c039d04f4101abbb90bbdb1aaf7cf034e6519f67ec637 not found: ID does not exist"
Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.103014 4827 scope.go:117] "RemoveContainer" containerID="27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b"
Jan 26 09:25:53 crc kubenswrapper[4827]: E0126 09:25:53.103232 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b\": container with ID starting with 27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b not found: ID does not exist" containerID="27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b"
Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.103252 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b"} err="failed to get container status \"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b\": rpc error: code = NotFound desc = could not find container 
\"27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b\": container with ID starting with 27b1ad911ead16487586c2c432862ee2c9a762920fcc9959c95ae5dc0238da2b not found: ID does not exist" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.714285 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" path="/var/lib/kubelet/pods/f6020e09-dbe0-4b59-9b60-895590ba8d0e/volumes" Jan 26 09:25:53 crc kubenswrapper[4827]: I0126 09:25:53.958386 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 09:25:58 crc kubenswrapper[4827]: I0126 09:25:58.371191 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:58 crc kubenswrapper[4827]: I0126 09:25:58.405153 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84fd67f47d-vt6sw" Jan 26 09:25:59 crc kubenswrapper[4827]: I0126 09:25:59.219885 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 09:26:02 crc kubenswrapper[4827]: I0126 09:26:02.145659 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:02 crc kubenswrapper[4827]: I0126 09:26:02.146165 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerName="kube-state-metrics" containerID="cri-o://192ff5c8aef6a340f8ec3a7bd2ac54de09cb4ed39e95776068a4dca13bc78ef6" gracePeriod=30 Jan 26 09:26:02 crc kubenswrapper[4827]: I0126 09:26:02.351878 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: 
connect: connection refused"
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.161755 4827 generic.go:334] "Generic (PLEG): container finished" podID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerID="192ff5c8aef6a340f8ec3a7bd2ac54de09cb4ed39e95776068a4dca13bc78ef6" exitCode=2
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.161836 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05543fb3-7874-4393-a8da-c3f6f7e65029","Type":"ContainerDied","Data":"192ff5c8aef6a340f8ec3a7bd2ac54de09cb4ed39e95776068a4dca13bc78ef6"}
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.245707 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.246152 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="proxy-httpd" containerID="cri-o://2f667ca041264aeef9e087fa2afc8764eee1b165548eccd01d70764c550b2019" gracePeriod=30
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.246361 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="sg-core" containerID="cri-o://01966f16350da6d781059ed993a6700820f908199480f42fce076dcc9e3befc8" gracePeriod=30
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.246353 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-central-agent" containerID="cri-o://a48c616a853fbeb159e59853e8e14787aa07745dc93535de57429b57065b92fd" gracePeriod=30
Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.246416 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" 
containerName="ceilometer-notification-agent" containerID="cri-o://9bbee3a7a8f3442a237cbd2448763f1d734a2969275dc04a69572f721c8062c0" gracePeriod=30 Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.776711 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.907325 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5pk\" (UniqueName: \"kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk\") pod \"05543fb3-7874-4393-a8da-c3f6f7e65029\" (UID: \"05543fb3-7874-4393-a8da-c3f6f7e65029\") " Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.912419 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk" (OuterVolumeSpecName: "kube-api-access-mj5pk") pod "05543fb3-7874-4393-a8da-c3f6f7e65029" (UID: "05543fb3-7874-4393-a8da-c3f6f7e65029"). InnerVolumeSpecName "kube-api-access-mj5pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:26:03 crc kubenswrapper[4827]: I0126 09:26:03.928038 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009363 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009466 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009507 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009571 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009602 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009620 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q866\" (UniqueName: 
\"kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.009678 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom\") pod \"25d94e18-5c09-4459-b330-861d46795409\" (UID: \"25d94e18-5c09-4459-b330-861d46795409\") " Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.010042 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5pk\" (UniqueName: \"kubernetes.io/projected/05543fb3-7874-4393-a8da-c3f6f7e65029-kube-api-access-mj5pk\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.010617 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.011495 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs" (OuterVolumeSpecName: "logs") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.016228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts" (OuterVolumeSpecName: "scripts") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.017211 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866" (OuterVolumeSpecName: "kube-api-access-6q866") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "kube-api-access-6q866". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.019887 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.055793 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.071254 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data" (OuterVolumeSpecName: "config-data") pod "25d94e18-5c09-4459-b330-861d46795409" (UID: "25d94e18-5c09-4459-b330-861d46795409"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.111930 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25d94e18-5c09-4459-b330-861d46795409-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.111965 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.111974 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.111983 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d94e18-5c09-4459-b330-861d46795409-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.111992 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.112000 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q866\" (UniqueName: 
\"kubernetes.io/projected/25d94e18-5c09-4459-b330-861d46795409-kube-api-access-6q866\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.112010 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d94e18-5c09-4459-b330-861d46795409-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172118 4827 generic.go:334] "Generic (PLEG): container finished" podID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerID="2f667ca041264aeef9e087fa2afc8764eee1b165548eccd01d70764c550b2019" exitCode=0 Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172145 4827 generic.go:334] "Generic (PLEG): container finished" podID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerID="01966f16350da6d781059ed993a6700820f908199480f42fce076dcc9e3befc8" exitCode=2 Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172155 4827 generic.go:334] "Generic (PLEG): container finished" podID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerID="a48c616a853fbeb159e59853e8e14787aa07745dc93535de57429b57065b92fd" exitCode=0 Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172186 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerDied","Data":"2f667ca041264aeef9e087fa2afc8764eee1b165548eccd01d70764c550b2019"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172210 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerDied","Data":"01966f16350da6d781059ed993a6700820f908199480f42fce076dcc9e3befc8"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.172220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerDied","Data":"a48c616a853fbeb159e59853e8e14787aa07745dc93535de57429b57065b92fd"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.175727 4827 generic.go:334] "Generic (PLEG): container finished" podID="25d94e18-5c09-4459-b330-861d46795409" containerID="546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417" exitCode=137 Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.175793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerDied","Data":"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.175821 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25d94e18-5c09-4459-b330-861d46795409","Type":"ContainerDied","Data":"f0b955b5aac44ac7073fd80a8dbf7294f1e98fe22c9f300f880e0220ea2a6342"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.175837 4827 scope.go:117] "RemoveContainer" containerID="546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.175935 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.181711 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a74b1cb5-e36a-49d0-b075-f3f269487645","Type":"ContainerStarted","Data":"9e6eac2ec24f0a86e9ed0872da73004003a932fc02b1d5f613c919d072091531"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.184049 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05543fb3-7874-4393-a8da-c3f6f7e65029","Type":"ContainerDied","Data":"3e19f7281021121ed84918f0d88d5e1efbdc232a6ded704be050ae3f41468966"} Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.184096 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.208951 4827 scope.go:117] "RemoveContainer" containerID="b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.227043 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.991045351 podStartE2EDuration="13.227021971s" podCreationTimestamp="2026-01-26 09:25:51 +0000 UTC" firstStartedPulling="2026-01-26 09:25:52.272078621 +0000 UTC m=+1180.920750440" lastFinishedPulling="2026-01-26 09:26:03.508055241 +0000 UTC m=+1192.156727060" observedRunningTime="2026-01-26 09:26:04.219125201 +0000 UTC m=+1192.867797020" watchObservedRunningTime="2026-01-26 09:26:04.227021971 +0000 UTC m=+1192.875693800" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.250180 4827 scope.go:117] "RemoveContainer" containerID="546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.250525 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417\": container with ID starting with 546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417 not found: ID does not exist" containerID="546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.250552 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417"} err="failed to get container status \"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417\": rpc error: code = NotFound desc = could not find container \"546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417\": container with ID starting with 546acb91b76158784fc803effd678f66caa55456e62686097027f8f1dadd3417 not found: ID does not exist" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.250571 4827 scope.go:117] "RemoveContainer" containerID="b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.250803 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757\": container with ID starting with b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757 not found: ID does not exist" containerID="b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.250822 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757"} err="failed to get container status \"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757\": rpc error: code = NotFound desc = could not find container \"b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757\": container 
with ID starting with b25be033d46c78d4a750a6cfbe4d6cf5febdd24070fe2035554c64235954b757 not found: ID does not exist" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.250834 4827 scope.go:117] "RemoveContainer" containerID="192ff5c8aef6a340f8ec3a7bd2ac54de09cb4ed39e95776068a4dca13bc78ef6" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.274157 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.294409 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.318144 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336210 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.336706 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336779 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.336790 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336795 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.336818 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336825 4827 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.336838 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336845 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api" Jan 26 09:26:04 crc kubenswrapper[4827]: E0126 09:26:04.336854 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerName="kube-state-metrics" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.336860 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerName="kube-state-metrics" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338030 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338050 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338063 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6020e09-dbe0-4b59-9b60-895590ba8d0e" containerName="barbican-api" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338074 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api-log" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338094 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" containerName="kube-state-metrics" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.338932 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.343612 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.343811 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.343956 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.353438 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.371102 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.388535 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.390223 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.393329 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.393514 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.401221 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436700 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqm7g\" (UniqueName: \"kubernetes.io/projected/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-kube-api-access-kqm7g\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436754 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436776 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-logs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436804 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436829 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436857 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436897 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-scripts\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.436928 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.437020 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 
09:26:04.538570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.538771 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.538904 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqm7g\" (UniqueName: \"kubernetes.io/projected/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-kube-api-access-kqm7g\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.538946 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.539032 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-logs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.539109 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.539506 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-logs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.539586 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.539727 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540193 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540284 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-scripts\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540345 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540374 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540402 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2lt\" (UniqueName: \"kubernetes.io/projected/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-api-access-8s2lt\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.540437 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.543312 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.543841 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.544448 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.544892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-scripts\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.545547 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.558854 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-config-data\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.560909 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqm7g\" (UniqueName: \"kubernetes.io/projected/07c0b8ae-368e-4e51-8686-6d5ce6def2a9-kube-api-access-kqm7g\") pod \"cinder-api-0\" (UID: \"07c0b8ae-368e-4e51-8686-6d5ce6def2a9\") " pod="openstack/cinder-api-0" Jan 26 09:26:04 crc 
kubenswrapper[4827]: I0126 09:26:04.642451 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.642509 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2lt\" (UniqueName: \"kubernetes.io/projected/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-api-access-8s2lt\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.642533 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.642602 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.647609 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.648085 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.648273 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d36e60e-5a78-4ce6-8997-688333022bc0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.662167 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2lt\" (UniqueName: \"kubernetes.io/projected/9d36e60e-5a78-4ce6-8997-688333022bc0-kube-api-access-8s2lt\") pod \"kube-state-metrics-0\" (UID: \"9d36e60e-5a78-4ce6-8997-688333022bc0\") " pod="openstack/kube-state-metrics-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.665023 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 09:26:04 crc kubenswrapper[4827]: I0126 09:26:04.773967 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 09:26:05 crc kubenswrapper[4827]: W0126 09:26:05.198909 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c0b8ae_368e_4e51_8686_6d5ce6def2a9.slice/crio-ad846e8c4c0d1a44f7e6f56d1c350dca02a824f71885061c0567968e9be59653 WatchSource:0}: Error finding container ad846e8c4c0d1a44f7e6f56d1c350dca02a824f71885061c0567968e9be59653: Status 404 returned error can't find the container with id ad846e8c4c0d1a44f7e6f56d1c350dca02a824f71885061c0567968e9be59653 Jan 26 09:26:05 crc kubenswrapper[4827]: I0126 09:26:05.207693 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 09:26:05 crc kubenswrapper[4827]: I0126 09:26:05.329022 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 09:26:05 crc kubenswrapper[4827]: I0126 09:26:05.745851 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05543fb3-7874-4393-a8da-c3f6f7e65029" path="/var/lib/kubelet/pods/05543fb3-7874-4393-a8da-c3f6f7e65029/volumes" Jan 26 09:26:05 crc kubenswrapper[4827]: I0126 09:26:05.746501 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d94e18-5c09-4459-b330-861d46795409" path="/var/lib/kubelet/pods/25d94e18-5c09-4459-b330-861d46795409/volumes" Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.236359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d36e60e-5a78-4ce6-8997-688333022bc0","Type":"ContainerStarted","Data":"6470508a5d7feb8830ec6e9daa6f455f431860c8e8f5bfeb73a5b7ffd4d327d1"} Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.236984 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"9d36e60e-5a78-4ce6-8997-688333022bc0","Type":"ContainerStarted","Data":"e09c5cc4388386b2b2d76cdf8fc70bbacca9fe6d1b80d92be168a26396364f0a"} Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.238562 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.244393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07c0b8ae-368e-4e51-8686-6d5ce6def2a9","Type":"ContainerStarted","Data":"7c112947239738e0313e03abdc17ee1f72ed2b2ee90a30d5b8ea1ab9c5d8c4da"} Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.244430 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07c0b8ae-368e-4e51-8686-6d5ce6def2a9","Type":"ContainerStarted","Data":"ad846e8c4c0d1a44f7e6f56d1c350dca02a824f71885061c0567968e9be59653"} Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.256049 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bdc4699d9-tnd4c" Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.259356 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.864369103 podStartE2EDuration="2.259337089s" podCreationTimestamp="2026-01-26 09:26:04 +0000 UTC" firstStartedPulling="2026-01-26 09:26:05.334480395 +0000 UTC m=+1193.983152214" lastFinishedPulling="2026-01-26 09:26:05.729448381 +0000 UTC m=+1194.378120200" observedRunningTime="2026-01-26 09:26:06.257011177 +0000 UTC m=+1194.905682996" watchObservedRunningTime="2026-01-26 09:26:06.259337089 +0000 UTC m=+1194.908008908" Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.319407 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-669c664556-xn8st"] Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.319807 4827 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/neutron-669c664556-xn8st" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-httpd" containerID="cri-o://56cab7b8704b3e651f9d40f0e0b77cbf71a4a53420baec52c47578fcafd29b84" gracePeriod=30 Jan 26 09:26:06 crc kubenswrapper[4827]: I0126 09:26:06.319629 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-669c664556-xn8st" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-api" containerID="cri-o://f673a28c07db6e9b4fffa5321a6f782d479a60c08355ccffbe919ebd9373a64a" gracePeriod=30 Jan 26 09:26:07 crc kubenswrapper[4827]: I0126 09:26:07.253811 4827 generic.go:334] "Generic (PLEG): container finished" podID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerID="56cab7b8704b3e651f9d40f0e0b77cbf71a4a53420baec52c47578fcafd29b84" exitCode=0 Jan 26 09:26:07 crc kubenswrapper[4827]: I0126 09:26:07.253949 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerDied","Data":"56cab7b8704b3e651f9d40f0e0b77cbf71a4a53420baec52c47578fcafd29b84"} Jan 26 09:26:07 crc kubenswrapper[4827]: I0126 09:26:07.257401 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07c0b8ae-368e-4e51-8686-6d5ce6def2a9","Type":"ContainerStarted","Data":"79097ef17ea966c6713b55259b818b766b07a2bd6efb6c2603136c9c600625c2"} Jan 26 09:26:07 crc kubenswrapper[4827]: I0126 09:26:07.257452 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 09:26:08 crc kubenswrapper[4827]: I0126 09:26:08.264703 4827 generic.go:334] "Generic (PLEG): container finished" podID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerID="9bbee3a7a8f3442a237cbd2448763f1d734a2969275dc04a69572f721c8062c0" exitCode=0 Jan 26 09:26:08 crc kubenswrapper[4827]: I0126 09:26:08.266431 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerDied","Data":"9bbee3a7a8f3442a237cbd2448763f1d734a2969275dc04a69572f721c8062c0"} Jan 26 09:26:08 crc kubenswrapper[4827]: I0126 09:26:08.774960 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="25d94e18-5c09-4459-b330-861d46795409" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.150:8776/healthcheck\": dial tcp 10.217.0.150:8776: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.717553 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.744705 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.744683444 podStartE2EDuration="5.744683444s" podCreationTimestamp="2026-01-26 09:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:26:07.280972965 +0000 UTC m=+1195.929644784" watchObservedRunningTime="2026-01-26 09:26:09.744683444 +0000 UTC m=+1198.393355273" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854084 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854172 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 
crc kubenswrapper[4827]: I0126 09:26:09.854250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crdc4\" (UniqueName: \"kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854333 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854360 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.854461 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts\") pod \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\" (UID: \"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb\") " Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.855145 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd" (OuterVolumeSpecName: "log-httpd") 
pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.887542 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4" (OuterVolumeSpecName: "kube-api-access-crdc4") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "kube-api-access-crdc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.887685 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.889794 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts" (OuterVolumeSpecName: "scripts") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.919784 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.956803 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crdc4\" (UniqueName: \"kubernetes.io/projected/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-kube-api-access-crdc4\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.957140 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.957156 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.957170 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.957182 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:09 crc kubenswrapper[4827]: I0126 09:26:09.961418 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.022777 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data" (OuterVolumeSpecName: "config-data") pod "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" (UID: "fd79e837-4610-45ba-b2b9-ee7f3e8d52eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.058663 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.058850 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.284047 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd79e837-4610-45ba-b2b9-ee7f3e8d52eb","Type":"ContainerDied","Data":"cd910e2200b4663058be37a9e9d559970805597863ceee3e09ad9ac0ce42c088"} Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.284102 4827 scope.go:117] "RemoveContainer" containerID="2f667ca041264aeef9e087fa2afc8764eee1b165548eccd01d70764c550b2019" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.284129 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.318063 4827 scope.go:117] "RemoveContainer" containerID="01966f16350da6d781059ed993a6700820f908199480f42fce076dcc9e3befc8" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.328871 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.339139 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.359994 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:10 crc kubenswrapper[4827]: E0126 09:26:10.360378 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="sg-core" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360395 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="sg-core" Jan 26 09:26:10 crc kubenswrapper[4827]: E0126 09:26:10.360414 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-notification-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360420 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-notification-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: E0126 09:26:10.360433 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-central-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360439 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-central-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: E0126 09:26:10.360457 4827 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="proxy-httpd" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360463 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="proxy-httpd" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360624 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="sg-core" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360650 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-notification-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360663 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="ceilometer-central-agent" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.360673 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" containerName="proxy-httpd" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.362032 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.368077 4827 scope.go:117] "RemoveContainer" containerID="9bbee3a7a8f3442a237cbd2448763f1d734a2969275dc04a69572f721c8062c0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.369711 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.373866 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.374045 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.382119 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.448723 4827 scope.go:117] "RemoveContainer" containerID="a48c616a853fbeb159e59853e8e14787aa07745dc93535de57429b57065b92fd" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.465889 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.465926 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.465978 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.465994 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.466042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcjdd\" (UniqueName: \"kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.466071 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.466088 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.466122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568218 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568295 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568325 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568389 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568413 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568476 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gcjdd\" (UniqueName: \"kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568516 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568539 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.568815 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.569088 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.573200 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.573309 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.573361 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.576552 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.577423 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.591234 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcjdd\" (UniqueName: \"kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd\") pod \"ceilometer-0\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") " pod="openstack/ceilometer-0" Jan 26 09:26:10 crc kubenswrapper[4827]: I0126 09:26:10.687490 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:11 crc kubenswrapper[4827]: I0126 09:26:11.211692 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:11 crc kubenswrapper[4827]: I0126 09:26:11.291550 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerStarted","Data":"baf7eaec2c51c211baf4fad89bbd986496775232166b6f9c4eac09ab9e914a03"} Jan 26 09:26:11 crc kubenswrapper[4827]: I0126 09:26:11.724014 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd79e837-4610-45ba-b2b9-ee7f3e8d52eb" path="/var/lib/kubelet/pods/fd79e837-4610-45ba-b2b9-ee7f3e8d52eb/volumes" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.313010 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerStarted","Data":"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176"} Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.315507 4827 generic.go:334] "Generic (PLEG): container finished" podID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerID="f673a28c07db6e9b4fffa5321a6f782d479a60c08355ccffbe919ebd9373a64a" exitCode=0 Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.315615 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerDied","Data":"f673a28c07db6e9b4fffa5321a6f782d479a60c08355ccffbe919ebd9373a64a"} Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.315661 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669c664556-xn8st" event={"ID":"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73","Type":"ContainerDied","Data":"8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475"} Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.315675 4827 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f9d9f3ea32a701938c601801dee49f77393c8a7d50ebca5ecc9b321ba82f475" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.334222 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669c664556-xn8st" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.404090 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs\") pod \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.404148 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config\") pod \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.404190 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle\") pod \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.404225 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fnwm\" (UniqueName: \"kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm\") pod \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.404263 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config\") pod \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\" (UID: \"1ff2c416-a1be-4c3b-a73e-8c779a9cfb73\") " Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.426938 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm" (OuterVolumeSpecName: "kube-api-access-5fnwm") pod "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" (UID: "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73"). InnerVolumeSpecName "kube-api-access-5fnwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.431919 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" (UID: "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.506260 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fnwm\" (UniqueName: \"kubernetes.io/projected/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-kube-api-access-5fnwm\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.508510 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.508908 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config" (OuterVolumeSpecName: "config") pod "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" (UID: "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.530615 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" (UID: "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.574681 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" (UID: "1ff2c416-a1be-4c3b-a73e-8c779a9cfb73"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.609902 4827 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.609938 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:12 crc kubenswrapper[4827]: I0126 09:26:12.609952 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:13 crc kubenswrapper[4827]: I0126 09:26:13.325361 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-669c664556-xn8st" Jan 26 09:26:13 crc kubenswrapper[4827]: I0126 09:26:13.328696 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerStarted","Data":"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3"} Jan 26 09:26:13 crc kubenswrapper[4827]: I0126 09:26:13.393659 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-669c664556-xn8st"] Jan 26 09:26:13 crc kubenswrapper[4827]: I0126 09:26:13.407612 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-669c664556-xn8st"] Jan 26 09:26:13 crc kubenswrapper[4827]: I0126 09:26:13.713518 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" path="/var/lib/kubelet/pods/1ff2c416-a1be-4c3b-a73e-8c779a9cfb73/volumes" Jan 26 09:26:14 crc kubenswrapper[4827]: I0126 09:26:14.125306 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:14 crc kubenswrapper[4827]: I0126 09:26:14.334890 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerStarted","Data":"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54"} Jan 26 09:26:14 crc kubenswrapper[4827]: I0126 09:26:14.788254 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352383 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerStarted","Data":"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf"} Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352577 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-central-agent" containerID="cri-o://41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176" gracePeriod=30 Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352588 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352728 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-notification-agent" containerID="cri-o://cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3" gracePeriod=30 Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352751 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="proxy-httpd" containerID="cri-o://024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf" gracePeriod=30 Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.352733 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="sg-core" containerID="cri-o://c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54" gracePeriod=30 Jan 26 09:26:15 crc kubenswrapper[4827]: I0126 09:26:15.385228 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.978791467 podStartE2EDuration="5.385211709s" podCreationTimestamp="2026-01-26 09:26:10 +0000 UTC" firstStartedPulling="2026-01-26 09:26:11.220701127 +0000 UTC m=+1199.869372946" lastFinishedPulling="2026-01-26 09:26:14.627121369 +0000 UTC m=+1203.275793188" observedRunningTime="2026-01-26 09:26:15.380851882 +0000 UTC m=+1204.029523721" watchObservedRunningTime="2026-01-26 
09:26:15.385211709 +0000 UTC m=+1204.033883528" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362516 4827 generic.go:334] "Generic (PLEG): container finished" podID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerID="024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf" exitCode=0 Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362785 4827 generic.go:334] "Generic (PLEG): container finished" podID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerID="c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54" exitCode=2 Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362795 4827 generic.go:334] "Generic (PLEG): container finished" podID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerID="cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3" exitCode=0 Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362558 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerDied","Data":"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf"} Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362827 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerDied","Data":"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54"} Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.362842 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerDied","Data":"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3"} Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.517240 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fk7m4"] Jan 26 09:26:16 crc kubenswrapper[4827]: E0126 09:26:16.517576 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-httpd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.517592 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-httpd" Jan 26 09:26:16 crc kubenswrapper[4827]: E0126 09:26:16.517604 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-api" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.517610 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-api" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.517774 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-httpd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.517793 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff2c416-a1be-4c3b-a73e-8c779a9cfb73" containerName="neutron-api" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.518441 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.528287 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fk7m4"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.583093 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts\") pod \"nova-api-db-create-fk7m4\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.583151 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjqq\" (UniqueName: \"kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq\") pod \"nova-api-db-create-fk7m4\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.618678 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vm454"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.619922 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.644881 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vm454"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.658015 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b376-account-create-update-tqdsd"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.659087 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.661045 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.678985 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b376-account-create-update-tqdsd"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.684769 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtqx\" (UniqueName: \"kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.684849 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq55v\" (UniqueName: \"kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v\") pod \"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.684920 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts\") pod \"nova-api-db-create-fk7m4\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.684939 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjqq\" (UniqueName: \"kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq\") pod \"nova-api-db-create-fk7m4\" (UID: 
\"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.684971 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.685005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts\") pod \"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.685820 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts\") pod \"nova-api-db-create-fk7m4\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.715082 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjqq\" (UniqueName: \"kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq\") pod \"nova-api-db-create-fk7m4\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") " pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.786500 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq55v\" (UniqueName: \"kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v\") pod 
\"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.786923 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.786972 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts\") pod \"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.787025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtqx\" (UniqueName: \"kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.787920 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts\") pod \"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.788184 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.803020 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq55v\" (UniqueName: \"kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v\") pod \"nova-api-b376-account-create-update-tqdsd\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") " pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.813179 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtqx\" (UniqueName: \"kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx\") pod \"nova-cell0-db-create-vm454\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") " pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.841403 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6r5mx"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.842676 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.847096 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.856736 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6r5mx"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.867539 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f56d-account-create-update-74r6s"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.869309 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.875876 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.889190 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s7b\" (UniqueName: \"kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.889273 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.889354 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qd4\" (UniqueName: \"kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.889390 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:16 crc 
kubenswrapper[4827]: I0126 09:26:16.897280 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f56d-account-create-update-74r6s"] Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.934550 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vm454" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.979952 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b376-account-create-update-tqdsd" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.994702 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qd4\" (UniqueName: \"kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.994759 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.994828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s7b\" (UniqueName: \"kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.994861 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.995631 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:16 crc kubenswrapper[4827]: I0126 09:26:16.996439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.042315 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qd4\" (UniqueName: \"kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4\") pod \"nova-cell1-db-create-6r5mx\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") " pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.046028 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s7b\" (UniqueName: \"kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b\") pod \"nova-cell0-f56d-account-create-update-74r6s\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") " pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.083818 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7bf0-account-create-update-qxgg2"] 
Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.084834 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.087148 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.106894 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7bf0-account-create-update-qxgg2"] Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.200553 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.200671 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7nr\" (UniqueName: \"kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.244957 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r5mx" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.251026 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f56d-account-create-update-74r6s" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.301866 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7nr\" (UniqueName: \"kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.301982 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.302778 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.315768 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.338857 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7nr\" (UniqueName: \"kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr\") pod \"nova-cell1-7bf0-account-create-update-qxgg2\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") " pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 
09:26:17.420667 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.576233 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vm454"] Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.598122 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fk7m4"] Jan 26 09:26:17 crc kubenswrapper[4827]: W0126 09:26:17.614326 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869917b6_85c2_45fe_8bb2_1bb1bebed474.slice/crio-6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6 WatchSource:0}: Error finding container 6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6: Status 404 returned error can't find the container with id 6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6 Jan 26 09:26:17 crc kubenswrapper[4827]: I0126 09:26:17.972536 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b376-account-create-update-tqdsd"] Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.017923 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6r5mx"] Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.168709 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f56d-account-create-update-74r6s"] Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.186741 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7bf0-account-create-update-qxgg2"] Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.390724 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r5mx" 
event={"ID":"31de30bf-1b49-4768-8f78-7c3812d1f6f9","Type":"ContainerStarted","Data":"7eda53e307649f59fae9626e39b8f916a67a160ec3d0a2fa88f78a25d743db4e"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.390772 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r5mx" event={"ID":"31de30bf-1b49-4768-8f78-7c3812d1f6f9","Type":"ContainerStarted","Data":"9450e486b016763a288240a7977c0f2db83db42f8580ca6c09e4a306773e58a1"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.392269 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f56d-account-create-update-74r6s" event={"ID":"891a9e82-f3d1-42d5-81c3-9d397421322a","Type":"ContainerStarted","Data":"ba2d6ba59e0fa669a88983b796a5fc7d22fd8b4e733ed2c593d45a333b34d627"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.393446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" event={"ID":"6510fbc8-65b5-4783-8992-66f4b0899cef","Type":"ContainerStarted","Data":"ce795e06456634a9394843df7a3343ab72e7872e1e2e3cd37bb9be34f991cf90"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.395343 4827 generic.go:334] "Generic (PLEG): container finished" podID="d0b6bbd8-6b37-497c-a3e4-e576808ad689" containerID="e9d1f1368f2195858fcc84b4260fe68cec52093b8a0f62cec161da80ac3ef9f8" exitCode=0 Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.395413 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fk7m4" event={"ID":"d0b6bbd8-6b37-497c-a3e4-e576808ad689","Type":"ContainerDied","Data":"e9d1f1368f2195858fcc84b4260fe68cec52093b8a0f62cec161da80ac3ef9f8"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.395434 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fk7m4" event={"ID":"d0b6bbd8-6b37-497c-a3e4-e576808ad689","Type":"ContainerStarted","Data":"ad1d80585824542583b4b77c08b04882df9f840e1734e18bdcde8bb92a4b03eb"} 
Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.397724 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b376-account-create-update-tqdsd" event={"ID":"c811b170-ced1-481b-adf7-9167094df800","Type":"ContainerStarted","Data":"d48e0771f890e354685262eb2d234560519a8e19ed3f56050f0a070b6f3ef632"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.397779 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b376-account-create-update-tqdsd" event={"ID":"c811b170-ced1-481b-adf7-9167094df800","Type":"ContainerStarted","Data":"e946753923ab08eab113f548abb2099066f3a90b4e5f67aa6ef9ed5b97b0a804"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.400508 4827 generic.go:334] "Generic (PLEG): container finished" podID="869917b6-85c2-45fe-8bb2-1bb1bebed474" containerID="409a77068fc0b1fafa033820c4ca45cc3d83b4e84b880e4b55b949b1a1a43f55" exitCode=0 Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.400572 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vm454" event={"ID":"869917b6-85c2-45fe-8bb2-1bb1bebed474","Type":"ContainerDied","Data":"409a77068fc0b1fafa033820c4ca45cc3d83b4e84b880e4b55b949b1a1a43f55"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.400590 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vm454" event={"ID":"869917b6-85c2-45fe-8bb2-1bb1bebed474","Type":"ContainerStarted","Data":"6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6"} Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.418685 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6r5mx" podStartSLOduration=2.418665173 podStartE2EDuration="2.418665173s" podCreationTimestamp="2026-01-26 09:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:26:18.407234497 +0000 UTC 
m=+1207.055906336" watchObservedRunningTime="2026-01-26 09:26:18.418665173 +0000 UTC m=+1207.067337002" Jan 26 09:26:18 crc kubenswrapper[4827]: I0126 09:26:18.467557 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b376-account-create-update-tqdsd" podStartSLOduration=2.467537912 podStartE2EDuration="2.467537912s" podCreationTimestamp="2026-01-26 09:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:26:18.463942406 +0000 UTC m=+1207.112614225" watchObservedRunningTime="2026-01-26 09:26:18.467537912 +0000 UTC m=+1207.116209731" Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.411326 4827 generic.go:334] "Generic (PLEG): container finished" podID="31de30bf-1b49-4768-8f78-7c3812d1f6f9" containerID="7eda53e307649f59fae9626e39b8f916a67a160ec3d0a2fa88f78a25d743db4e" exitCode=0 Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.411802 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r5mx" event={"ID":"31de30bf-1b49-4768-8f78-7c3812d1f6f9","Type":"ContainerDied","Data":"7eda53e307649f59fae9626e39b8f916a67a160ec3d0a2fa88f78a25d743db4e"} Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.414061 4827 generic.go:334] "Generic (PLEG): container finished" podID="891a9e82-f3d1-42d5-81c3-9d397421322a" containerID="58269286c65356587e5b8338fd4e53fd847b4318c08a05ad5f4d7500d717eb2e" exitCode=0 Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.414115 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f56d-account-create-update-74r6s" event={"ID":"891a9e82-f3d1-42d5-81c3-9d397421322a","Type":"ContainerDied","Data":"58269286c65356587e5b8338fd4e53fd847b4318c08a05ad5f4d7500d717eb2e"} Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.416274 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="6510fbc8-65b5-4783-8992-66f4b0899cef" containerID="23775374a0bfefd753ecaa0d2cf7b6e9903e9066d7bf541cbd87abd9d881fd3d" exitCode=0 Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.416335 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" event={"ID":"6510fbc8-65b5-4783-8992-66f4b0899cef","Type":"ContainerDied","Data":"23775374a0bfefd753ecaa0d2cf7b6e9903e9066d7bf541cbd87abd9d881fd3d"} Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.417889 4827 generic.go:334] "Generic (PLEG): container finished" podID="c811b170-ced1-481b-adf7-9167094df800" containerID="d48e0771f890e354685262eb2d234560519a8e19ed3f56050f0a070b6f3ef632" exitCode=0 Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.417956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b376-account-create-update-tqdsd" event={"ID":"c811b170-ced1-481b-adf7-9167094df800","Type":"ContainerDied","Data":"d48e0771f890e354685262eb2d234560519a8e19ed3f56050f0a070b6f3ef632"} Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.902517 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fk7m4" Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.909040 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vm454"
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.991694 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsjqq\" (UniqueName: \"kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq\") pod \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") "
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.991756 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbtqx\" (UniqueName: \"kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx\") pod \"869917b6-85c2-45fe-8bb2-1bb1bebed474\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") "
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.991872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts\") pod \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\" (UID: \"d0b6bbd8-6b37-497c-a3e4-e576808ad689\") "
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.991957 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts\") pod \"869917b6-85c2-45fe-8bb2-1bb1bebed474\" (UID: \"869917b6-85c2-45fe-8bb2-1bb1bebed474\") "
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.993085 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "869917b6-85c2-45fe-8bb2-1bb1bebed474" (UID: "869917b6-85c2-45fe-8bb2-1bb1bebed474"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:19 crc kubenswrapper[4827]: I0126 09:26:19.993892 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0b6bbd8-6b37-497c-a3e4-e576808ad689" (UID: "d0b6bbd8-6b37-497c-a3e4-e576808ad689"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.000310 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx" (OuterVolumeSpecName: "kube-api-access-vbtqx") pod "869917b6-85c2-45fe-8bb2-1bb1bebed474" (UID: "869917b6-85c2-45fe-8bb2-1bb1bebed474"). InnerVolumeSpecName "kube-api-access-vbtqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.000713 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq" (OuterVolumeSpecName: "kube-api-access-zsjqq") pod "d0b6bbd8-6b37-497c-a3e4-e576808ad689" (UID: "d0b6bbd8-6b37-497c-a3e4-e576808ad689"). InnerVolumeSpecName "kube-api-access-zsjqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.093941 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b6bbd8-6b37-497c-a3e4-e576808ad689-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.093985 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869917b6-85c2-45fe-8bb2-1bb1bebed474-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.094001 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsjqq\" (UniqueName: \"kubernetes.io/projected/d0b6bbd8-6b37-497c-a3e4-e576808ad689-kube-api-access-zsjqq\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.094016 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbtqx\" (UniqueName: \"kubernetes.io/projected/869917b6-85c2-45fe-8bb2-1bb1bebed474-kube-api-access-vbtqx\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.432063 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vm454" event={"ID":"869917b6-85c2-45fe-8bb2-1bb1bebed474","Type":"ContainerDied","Data":"6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6"}
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.432108 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee1e73fabcb572d1ea17940e5cf8d688d5434c6e315a85215f564eb991515a6"
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.432175 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vm454"
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.440508 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fk7m4"
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.441347 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fk7m4" event={"ID":"d0b6bbd8-6b37-497c-a3e4-e576808ad689","Type":"ContainerDied","Data":"ad1d80585824542583b4b77c08b04882df9f840e1734e18bdcde8bb92a4b03eb"}
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.441383 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad1d80585824542583b4b77c08b04882df9f840e1734e18bdcde8bb92a4b03eb"
Jan 26 09:26:20 crc kubenswrapper[4827]: E0126 09:26:20.631746 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869917b6_85c2_45fe_8bb2_1bb1bebed474.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b6bbd8_6b37_497c_a3e4_e576808ad689.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b6bbd8_6b37_497c_a3e4_e576808ad689.slice/crio-ad1d80585824542583b4b77c08b04882df9f840e1734e18bdcde8bb92a4b03eb\": RecentStats: unable to find data in memory cache]"
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.829422 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f56d-account-create-update-74r6s"
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.913165 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts\") pod \"891a9e82-f3d1-42d5-81c3-9d397421322a\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") "
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.914131 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7s7b\" (UniqueName: \"kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b\") pod \"891a9e82-f3d1-42d5-81c3-9d397421322a\" (UID: \"891a9e82-f3d1-42d5-81c3-9d397421322a\") "
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.915751 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "891a9e82-f3d1-42d5-81c3-9d397421322a" (UID: "891a9e82-f3d1-42d5-81c3-9d397421322a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:20 crc kubenswrapper[4827]: I0126 09:26:20.928667 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b" (OuterVolumeSpecName: "kube-api-access-h7s7b") pod "891a9e82-f3d1-42d5-81c3-9d397421322a" (UID: "891a9e82-f3d1-42d5-81c3-9d397421322a"). InnerVolumeSpecName "kube-api-access-h7s7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.018061 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891a9e82-f3d1-42d5-81c3-9d397421322a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.018089 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7s7b\" (UniqueName: \"kubernetes.io/projected/891a9e82-f3d1-42d5-81c3-9d397421322a-kube-api-access-h7s7b\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.051186 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r5mx"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.068728 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.077898 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b376-account-create-update-tqdsd"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.118732 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5qd4\" (UniqueName: \"kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4\") pod \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.118805 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq55v\" (UniqueName: \"kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v\") pod \"c811b170-ced1-481b-adf7-9167094df800\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.118868 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts\") pod \"c811b170-ced1-481b-adf7-9167094df800\" (UID: \"c811b170-ced1-481b-adf7-9167094df800\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.118908 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts\") pod \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\" (UID: \"31de30bf-1b49-4768-8f78-7c3812d1f6f9\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.118928 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts\") pod \"6510fbc8-65b5-4783-8992-66f4b0899cef\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.119029 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz7nr\" (UniqueName: \"kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr\") pod \"6510fbc8-65b5-4783-8992-66f4b0899cef\" (UID: \"6510fbc8-65b5-4783-8992-66f4b0899cef\") "
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.120238 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c811b170-ced1-481b-adf7-9167094df800" (UID: "c811b170-ced1-481b-adf7-9167094df800"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.120789 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31de30bf-1b49-4768-8f78-7c3812d1f6f9" (UID: "31de30bf-1b49-4768-8f78-7c3812d1f6f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.121312 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6510fbc8-65b5-4783-8992-66f4b0899cef" (UID: "6510fbc8-65b5-4783-8992-66f4b0899cef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.137457 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr" (OuterVolumeSpecName: "kube-api-access-kz7nr") pod "6510fbc8-65b5-4783-8992-66f4b0899cef" (UID: "6510fbc8-65b5-4783-8992-66f4b0899cef"). InnerVolumeSpecName "kube-api-access-kz7nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.137994 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4" (OuterVolumeSpecName: "kube-api-access-j5qd4") pod "31de30bf-1b49-4768-8f78-7c3812d1f6f9" (UID: "31de30bf-1b49-4768-8f78-7c3812d1f6f9"). InnerVolumeSpecName "kube-api-access-j5qd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.141199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v" (OuterVolumeSpecName: "kube-api-access-hq55v") pod "c811b170-ced1-481b-adf7-9167094df800" (UID: "c811b170-ced1-481b-adf7-9167094df800"). InnerVolumeSpecName "kube-api-access-hq55v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221397 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5qd4\" (UniqueName: \"kubernetes.io/projected/31de30bf-1b49-4768-8f78-7c3812d1f6f9-kube-api-access-j5qd4\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221437 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq55v\" (UniqueName: \"kubernetes.io/projected/c811b170-ced1-481b-adf7-9167094df800-kube-api-access-hq55v\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221450 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c811b170-ced1-481b-adf7-9167094df800-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221462 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31de30bf-1b49-4768-8f78-7c3812d1f6f9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221476 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6510fbc8-65b5-4783-8992-66f4b0899cef-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.221486 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz7nr\" (UniqueName: \"kubernetes.io/projected/6510fbc8-65b5-4783-8992-66f4b0899cef-kube-api-access-kz7nr\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.459942 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b376-account-create-update-tqdsd"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.460372 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b376-account-create-update-tqdsd" event={"ID":"c811b170-ced1-481b-adf7-9167094df800","Type":"ContainerDied","Data":"e946753923ab08eab113f548abb2099066f3a90b4e5f67aa6ef9ed5b97b0a804"}
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.460478 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e946753923ab08eab113f548abb2099066f3a90b4e5f67aa6ef9ed5b97b0a804"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.465799 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r5mx" event={"ID":"31de30bf-1b49-4768-8f78-7c3812d1f6f9","Type":"ContainerDied","Data":"9450e486b016763a288240a7977c0f2db83db42f8580ca6c09e4a306773e58a1"}
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.465869 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9450e486b016763a288240a7977c0f2db83db42f8580ca6c09e4a306773e58a1"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.466107 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r5mx"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.467840 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f56d-account-create-update-74r6s"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.467841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f56d-account-create-update-74r6s" event={"ID":"891a9e82-f3d1-42d5-81c3-9d397421322a","Type":"ContainerDied","Data":"ba2d6ba59e0fa669a88983b796a5fc7d22fd8b4e733ed2c593d45a333b34d627"}
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.468114 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba2d6ba59e0fa669a88983b796a5fc7d22fd8b4e733ed2c593d45a333b34d627"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.473665 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2" event={"ID":"6510fbc8-65b5-4783-8992-66f4b0899cef","Type":"ContainerDied","Data":"ce795e06456634a9394843df7a3343ab72e7872e1e2e3cd37bb9be34f991cf90"}
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.473708 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce795e06456634a9394843df7a3343ab72e7872e1e2e3cd37bb9be34f991cf90"
Jan 26 09:26:21 crc kubenswrapper[4827]: I0126 09:26:21.473780 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7bf0-account-create-update-qxgg2"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.189571 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2qr8h"]
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190017 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b6bbd8-6b37-497c-a3e4-e576808ad689" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190037 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b6bbd8-6b37-497c-a3e4-e576808ad689" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190061 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6510fbc8-65b5-4783-8992-66f4b0899cef" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190070 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6510fbc8-65b5-4783-8992-66f4b0899cef" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190109 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31de30bf-1b49-4768-8f78-7c3812d1f6f9" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190118 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="31de30bf-1b49-4768-8f78-7c3812d1f6f9" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190129 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869917b6-85c2-45fe-8bb2-1bb1bebed474" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190137 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="869917b6-85c2-45fe-8bb2-1bb1bebed474" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190152 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891a9e82-f3d1-42d5-81c3-9d397421322a" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190161 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="891a9e82-f3d1-42d5-81c3-9d397421322a" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: E0126 09:26:22.190172 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c811b170-ced1-481b-adf7-9167094df800" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190180 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c811b170-ced1-481b-adf7-9167094df800" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190377 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="869917b6-85c2-45fe-8bb2-1bb1bebed474" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190397 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="31de30bf-1b49-4768-8f78-7c3812d1f6f9" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190409 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c811b170-ced1-481b-adf7-9167094df800" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190428 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b6bbd8-6b37-497c-a3e4-e576808ad689" containerName="mariadb-database-create"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190442 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="891a9e82-f3d1-42d5-81c3-9d397421322a" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.190455 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6510fbc8-65b5-4783-8992-66f4b0899cef" containerName="mariadb-account-create-update"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.191140 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.195199 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.195658 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vzzrt"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.195914 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.214068 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2qr8h"]
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.237004 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4gn\" (UniqueName: \"kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.237085 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.237150 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.237241 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.338720 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.338815 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.338870 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4gn\" (UniqueName: \"kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.338909 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.342815 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.343329 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.345069 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.360317 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4gn\" (UniqueName: \"kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn\") pod \"nova-cell0-conductor-db-sync-2qr8h\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.507304 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2qr8h"
Jan 26 09:26:22 crc kubenswrapper[4827]: I0126 09:26:22.975882 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2qr8h"]
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.377868 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469572 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469698 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469722 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469908 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469928 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469947 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.469971 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcjdd\" (UniqueName: \"kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd\") pod \"919440e6-28a7-48f4-94f1-4ff72b27325e\" (UID: \"919440e6-28a7-48f4-94f1-4ff72b27325e\") "
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.470251 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.472091 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.475705 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts" (OuterVolumeSpecName: "scripts") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.481590 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd" (OuterVolumeSpecName: "kube-api-access-gcjdd") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "kube-api-access-gcjdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.507309 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.512706 4827 generic.go:334] "Generic (PLEG): container finished" podID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerID="41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176" exitCode=0
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.512823 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerDied","Data":"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176"}
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.512854 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"919440e6-28a7-48f4-94f1-4ff72b27325e","Type":"ContainerDied","Data":"baf7eaec2c51c211baf4fad89bbd986496775232166b6f9c4eac09ab9e914a03"}
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.512872 4827 scope.go:117] "RemoveContainer" containerID="024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf"
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.512982 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.519575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" event={"ID":"441be7a9-ebbe-420e-896e-f28eb3cdbe16","Type":"ContainerStarted","Data":"eac5a1379e40f3182df3f97cfcef4c102e5dd6b206eaa9206d26d2c8cc228bf8"}
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.542528 4827 scope.go:117] "RemoveContainer" containerID="c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54"
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.572508 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.572536 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.572545 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.572554 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/919440e6-28a7-48f4-94f1-4ff72b27325e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.572562 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcjdd\" (UniqueName: \"kubernetes.io/projected/919440e6-28a7-48f4-94f1-4ff72b27325e-kube-api-access-gcjdd\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.588915 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.633899 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.640795 4827 scope.go:117] "RemoveContainer" containerID="cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3"
Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.660902 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data" (OuterVolumeSpecName: "config-data") pod "919440e6-28a7-48f4-94f1-4ff72b27325e" (UID: "919440e6-28a7-48f4-94f1-4ff72b27325e"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.674189 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.674227 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.674239 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/919440e6-28a7-48f4-94f1-4ff72b27325e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.688825 4827 scope.go:117] "RemoveContainer" containerID="41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.769010 4827 scope.go:117] "RemoveContainer" containerID="024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.775015 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf\": container with ID starting with 024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf not found: ID does not exist" containerID="024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.775064 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf"} err="failed to get container status 
\"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf\": rpc error: code = NotFound desc = could not find container \"024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf\": container with ID starting with 024aa2167749910296ac05fde85fd5c5e267351fbb12b08b9586762a6991d6bf not found: ID does not exist" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.775092 4827 scope.go:117] "RemoveContainer" containerID="c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.776076 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54\": container with ID starting with c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54 not found: ID does not exist" containerID="c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.776097 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54"} err="failed to get container status \"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54\": rpc error: code = NotFound desc = could not find container \"c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54\": container with ID starting with c5a16682fe2141a6eff7639e9b72f1e6bab7630b5fc0fde38d6ddbf902579b54 not found: ID does not exist" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.776111 4827 scope.go:117] "RemoveContainer" containerID="cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.777183 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3\": container with ID starting with cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3 not found: ID does not exist" containerID="cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.777201 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3"} err="failed to get container status \"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3\": rpc error: code = NotFound desc = could not find container \"cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3\": container with ID starting with cea8497186561eab87c7bddbfeb28f249c0b4c6b232f2bf397daf6b42e7adbd3 not found: ID does not exist" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.777216 4827 scope.go:117] "RemoveContainer" containerID="41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.777506 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176\": container with ID starting with 41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176 not found: ID does not exist" containerID="41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.777526 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176"} err="failed to get container status \"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176\": rpc error: code = NotFound desc = could not find container \"41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176\": container with ID 
starting with 41e49109ca083fb0def96e617429d85992965a382aba22c5f7ca6b7635ff0176 not found: ID does not exist" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.873776 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.883164 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900075 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.900485 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="sg-core" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900500 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="sg-core" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.900519 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-notification-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900525 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-notification-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.900545 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="proxy-httpd" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900553 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="proxy-httpd" Jan 26 09:26:23 crc kubenswrapper[4827]: E0126 09:26:23.900572 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-central-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: 
I0126 09:26:23.900579 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-central-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900777 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="sg-core" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900798 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="proxy-httpd" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900826 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-central-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.900835 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" containerName="ceilometer-notification-agent" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.902649 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.905691 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.908218 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.908398 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.921115 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981690 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981730 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981752 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46r6z\" (UniqueName: \"kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981841 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.981897 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.982040 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:23 crc kubenswrapper[4827]: I0126 09:26:23.982084 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083575 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data\") pod \"ceilometer-0\" (UID: 
\"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083687 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083718 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083811 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46r6z\" (UniqueName: \"kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083898 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.083925 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.084402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.084987 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.088887 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.090254 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc 
kubenswrapper[4827]: I0126 09:26:24.097869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.103169 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.104728 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.107782 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46r6z\" (UniqueName: \"kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z\") pod \"ceilometer-0\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.237753 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:24 crc kubenswrapper[4827]: I0126 09:26:24.795686 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:24 crc kubenswrapper[4827]: W0126 09:26:24.803510 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00270bd1_bca2_4b67_9fc8_483a5f968313.slice/crio-f85908afeeba1103c011fbcdbb6c1a716d96f50eaddb55475c4c1d91356e5e4e WatchSource:0}: Error finding container f85908afeeba1103c011fbcdbb6c1a716d96f50eaddb55475c4c1d91356e5e4e: Status 404 returned error can't find the container with id f85908afeeba1103c011fbcdbb6c1a716d96f50eaddb55475c4c1d91356e5e4e Jan 26 09:26:25 crc kubenswrapper[4827]: I0126 09:26:25.553975 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerStarted","Data":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"} Jan 26 09:26:25 crc kubenswrapper[4827]: I0126 09:26:25.554587 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerStarted","Data":"f85908afeeba1103c011fbcdbb6c1a716d96f50eaddb55475c4c1d91356e5e4e"} Jan 26 09:26:25 crc kubenswrapper[4827]: I0126 09:26:25.714879 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919440e6-28a7-48f4-94f1-4ff72b27325e" path="/var/lib/kubelet/pods/919440e6-28a7-48f4-94f1-4ff72b27325e/volumes" Jan 26 09:26:26 crc kubenswrapper[4827]: I0126 09:26:26.564255 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerStarted","Data":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"} Jan 26 09:26:31 crc kubenswrapper[4827]: I0126 09:26:31.609119 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" event={"ID":"441be7a9-ebbe-420e-896e-f28eb3cdbe16","Type":"ContainerStarted","Data":"6a40712e7a28a86e4dcfbd88a9e0480982fe67b0e37dc968f4300acaa0701ca3"} Jan 26 09:26:31 crc kubenswrapper[4827]: I0126 09:26:31.610886 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerStarted","Data":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"} Jan 26 09:26:31 crc kubenswrapper[4827]: I0126 09:26:31.631338 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" podStartSLOduration=1.4498472009999999 podStartE2EDuration="9.631321753s" podCreationTimestamp="2026-01-26 09:26:22 +0000 UTC" firstStartedPulling="2026-01-26 09:26:22.98524742 +0000 UTC m=+1211.633919239" lastFinishedPulling="2026-01-26 09:26:31.166721972 +0000 UTC m=+1219.815393791" observedRunningTime="2026-01-26 09:26:31.623473082 +0000 UTC m=+1220.272144901" watchObservedRunningTime="2026-01-26 09:26:31.631321753 +0000 UTC m=+1220.279993572" Jan 26 09:26:33 crc kubenswrapper[4827]: I0126 09:26:33.628478 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerStarted","Data":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"} Jan 26 09:26:33 crc kubenswrapper[4827]: I0126 09:26:33.630113 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 09:26:33 crc kubenswrapper[4827]: I0126 09:26:33.654662 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.710458774 podStartE2EDuration="10.65462982s" podCreationTimestamp="2026-01-26 09:26:23 +0000 UTC" firstStartedPulling="2026-01-26 09:26:24.806482088 +0000 UTC m=+1213.455153907" 
lastFinishedPulling="2026-01-26 09:26:32.750653134 +0000 UTC m=+1221.399324953" observedRunningTime="2026-01-26 09:26:33.648472995 +0000 UTC m=+1222.297144814" watchObservedRunningTime="2026-01-26 09:26:33.65462982 +0000 UTC m=+1222.303301639" Jan 26 09:26:34 crc kubenswrapper[4827]: I0126 09:26:34.864015 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:26:36 crc kubenswrapper[4827]: I0126 09:26:36.657214 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-central-agent" containerID="cri-o://fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0" gracePeriod=30 Jan 26 09:26:36 crc kubenswrapper[4827]: I0126 09:26:36.657981 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="proxy-httpd" containerID="cri-o://27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8" gracePeriod=30 Jan 26 09:26:36 crc kubenswrapper[4827]: I0126 09:26:36.658048 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="sg-core" containerID="cri-o://2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83" gracePeriod=30 Jan 26 09:26:36 crc kubenswrapper[4827]: I0126 09:26:36.658087 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-notification-agent" containerID="cri-o://2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7" gracePeriod=30 Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.571575 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638304 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638392 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638459 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46r6z\" (UniqueName: \"kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638489 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638523 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638593 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638614 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.638665 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml\") pod \"00270bd1-bca2-4b67-9fc8-483a5f968313\" (UID: \"00270bd1-bca2-4b67-9fc8-483a5f968313\") " Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.641282 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.641676 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.671785 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts" (OuterVolumeSpecName: "scripts") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.687922 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z" (OuterVolumeSpecName: "kube-api-access-46r6z") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "kube-api-access-46r6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.720181 4827 generic.go:334] "Generic (PLEG): container finished" podID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8" exitCode=0
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.720237 4827 generic.go:334] "Generic (PLEG): container finished" podID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83" exitCode=2
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.720249 4827 generic.go:334] "Generic (PLEG): container finished" podID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7" exitCode=0
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.720257 4827 generic.go:334] "Generic (PLEG): container finished" podID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0" exitCode=0
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.720395 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726424 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerDied","Data":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"}
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726472 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerDied","Data":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"}
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726485 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerDied","Data":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"}
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerDied","Data":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"}
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00270bd1-bca2-4b67-9fc8-483a5f968313","Type":"ContainerDied","Data":"f85908afeeba1103c011fbcdbb6c1a716d96f50eaddb55475c4c1d91356e5e4e"}
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.726526 4827 scope.go:117] "RemoveContainer" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.741114 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.741460 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46r6z\" (UniqueName: \"kubernetes.io/projected/00270bd1-bca2-4b67-9fc8-483a5f968313-kube-api-access-46r6z\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.741472 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00270bd1-bca2-4b67-9fc8-483a5f968313-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.741485 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.752352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.787831 4827 scope.go:117] "RemoveContainer" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.845826 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.847196 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.866885 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data" (OuterVolumeSpecName: "config-data") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.874813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00270bd1-bca2-4b67-9fc8-483a5f968313" (UID: "00270bd1-bca2-4b67-9fc8-483a5f968313"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.935592 4827 scope.go:117] "RemoveContainer" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.946951 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.946981 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.946991 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00270bd1-bca2-4b67-9fc8-483a5f968313-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.954073 4827 scope.go:117] "RemoveContainer" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.971467 4827 scope.go:117] "RemoveContainer" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: E0126 09:26:37.971889 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": container with ID starting with 27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8 not found: ID does not exist" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.971927 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"} err="failed to get container status \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": rpc error: code = NotFound desc = could not find container \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": container with ID starting with 27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.971952 4827 scope.go:117] "RemoveContainer" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: E0126 09:26:37.972240 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": container with ID starting with 2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83 not found: ID does not exist" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.972276 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"} err="failed to get container status \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": rpc error: code = NotFound desc = could not find container \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": container with ID starting with 2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.972304 4827 scope.go:117] "RemoveContainer" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: E0126 09:26:37.972685 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": container with ID starting with 2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7 not found: ID does not exist" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.972717 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"} err="failed to get container status \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": rpc error: code = NotFound desc = could not find container \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": container with ID starting with 2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.972735 4827 scope.go:117] "RemoveContainer" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: E0126 09:26:37.973002 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": container with ID starting with fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0 not found: ID does not exist" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973034 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"} err="failed to get container status \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": rpc error: code = NotFound desc = could not find container \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": container with ID starting with fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973052 4827 scope.go:117] "RemoveContainer" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973291 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"} err="failed to get container status \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": rpc error: code = NotFound desc = could not find container \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": container with ID starting with 27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973309 4827 scope.go:117] "RemoveContainer" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973602 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"} err="failed to get container status \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": rpc error: code = NotFound desc = could not find container \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": container with ID starting with 2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973628 4827 scope.go:117] "RemoveContainer" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973920 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"} err="failed to get container status \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": rpc error: code = NotFound desc = could not find container \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": container with ID starting with 2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.973942 4827 scope.go:117] "RemoveContainer" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974240 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"} err="failed to get container status \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": rpc error: code = NotFound desc = could not find container \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": container with ID starting with fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974279 4827 scope.go:117] "RemoveContainer" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974529 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"} err="failed to get container status \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": rpc error: code = NotFound desc = could not find container \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": container with ID starting with 27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974557 4827 scope.go:117] "RemoveContainer" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974836 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"} err="failed to get container status \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": rpc error: code = NotFound desc = could not find container \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": container with ID starting with 2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.974860 4827 scope.go:117] "RemoveContainer" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975105 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"} err="failed to get container status \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": rpc error: code = NotFound desc = could not find container \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": container with ID starting with 2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975130 4827 scope.go:117] "RemoveContainer" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975375 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"} err="failed to get container status \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": rpc error: code = NotFound desc = could not find container \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": container with ID starting with fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975402 4827 scope.go:117] "RemoveContainer" containerID="27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975631 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8"} err="failed to get container status \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": rpc error: code = NotFound desc = could not find container \"27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8\": container with ID starting with 27532f7b7f8454ab3b21ad653618c06aa1667870f3bb0bffd622603792ea82a8 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975676 4827 scope.go:117] "RemoveContainer" containerID="2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975895 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83"} err="failed to get container status \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": rpc error: code = NotFound desc = could not find container \"2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83\": container with ID starting with 2620a460d360df7f2323fc5f13d334c3153fc338f736c3918a5ffaa127614c83 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.975921 4827 scope.go:117] "RemoveContainer" containerID="2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.976115 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7"} err="failed to get container status \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": rpc error: code = NotFound desc = could not find container \"2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7\": container with ID starting with 2c7e9014bb0d0943c7c6d7214f9c29149c927005a2e273f0e42ba7b9be67afd7 not found: ID does not exist"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.976139 4827 scope.go:117] "RemoveContainer" containerID="fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"
Jan 26 09:26:37 crc kubenswrapper[4827]: I0126 09:26:37.976338 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0"} err="failed to get container status \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": rpc error: code = NotFound desc = could not find container \"fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0\": container with ID starting with fd918f62b3d7fc5549f8d74d795b8a6c8ab147770059d0d28a47c5a5581a29a0 not found: ID does not exist"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.061143 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.073186 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.088724 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:38 crc kubenswrapper[4827]: E0126 09:26:38.089067 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-central-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089085 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-central-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: E0126 09:26:38.089101 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="proxy-httpd"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089108 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="proxy-httpd"
Jan 26 09:26:38 crc kubenswrapper[4827]: E0126 09:26:38.089118 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="sg-core"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089124 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="sg-core"
Jan 26 09:26:38 crc kubenswrapper[4827]: E0126 09:26:38.089137 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-notification-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089143 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-notification-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089294 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="proxy-httpd"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089306 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-central-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089336 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="ceilometer-notification-agent"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.089345 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" containerName="sg-core"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.090950 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.094702 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.095086 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.106252 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.109806 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.149664 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.149736 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.149835 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nj9\" (UniqueName: \"kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.149960 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.150036 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.150121 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.150168 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.150234 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252256 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252325 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nj9\" (UniqueName: \"kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252419 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252468 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252545 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252578 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252617 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.252836 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.253290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.255889 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.256701 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.258062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.258804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.259279 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.272654 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98nj9\" (UniqueName: \"kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9\") pod \"ceilometer-0\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.429017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 09:26:38 crc kubenswrapper[4827]: I0126 09:26:38.882650 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 09:26:39 crc kubenswrapper[4827]: I0126 09:26:39.714758 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00270bd1-bca2-4b67-9fc8-483a5f968313" path="/var/lib/kubelet/pods/00270bd1-bca2-4b67-9fc8-483a5f968313/volumes"
Jan 26 09:26:39 crc kubenswrapper[4827]: I0126 09:26:39.739491 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerStarted","Data":"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d"}
Jan 26 09:26:39 crc kubenswrapper[4827]: I0126 09:26:39.739540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerStarted","Data":"15fe8c6db2e22671411d59b02082f4f8cdf305ac8086ea6e1b4745b2ff4e9b94"}
Jan 26 09:26:40 crc kubenswrapper[4827]: I0126 09:26:40.751152 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerStarted","Data":"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1"}
Jan 26 09:26:40 crc kubenswrapper[4827]: I0126 09:26:40.751692 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerStarted","Data":"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba"}
Jan 26 09:26:42 crc kubenswrapper[4827]: I0126 09:26:42.962036 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerStarted","Data":"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b"}
Jan 26 09:26:42 crc kubenswrapper[4827]: I0126 09:26:42.962806 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 09:26:42 crc kubenswrapper[4827]: I0126 09:26:42.985598 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.234765093 podStartE2EDuration="4.985572029s" podCreationTimestamp="2026-01-26 09:26:38 +0000 UTC" firstStartedPulling="2026-01-26 09:26:38.890389605 +0000 UTC m=+1227.539061424" lastFinishedPulling="2026-01-26 09:26:41.641196541 +0000 UTC m=+1230.289868360" observedRunningTime="2026-01-26 09:26:42.98375869 +0000 UTC m=+1231.632430529" watchObservedRunningTime="2026-01-26 09:26:42.985572029 +0000 UTC m=+1231.634243858"
Jan 26 09:26:43 crc kubenswrapper[4827]: I0126 09:26:43.972741 4827 generic.go:334] "Generic (PLEG): container finished" podID="441be7a9-ebbe-420e-896e-f28eb3cdbe16" containerID="6a40712e7a28a86e4dcfbd88a9e0480982fe67b0e37dc968f4300acaa0701ca3" exitCode=0
Jan 26 09:26:43 crc kubenswrapper[4827]: I0126 09:26:43.972833 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" event={"ID":"441be7a9-ebbe-420e-896e-f28eb3cdbe16","Type":"ContainerDied","Data":"6a40712e7a28a86e4dcfbd88a9e0480982fe67b0e37dc968f4300acaa0701ca3"}
Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.301678 4827 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.472915 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle\") pod \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.472959 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v4gn\" (UniqueName: \"kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn\") pod \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.473073 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data\") pod \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.473113 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts\") pod \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\" (UID: \"441be7a9-ebbe-420e-896e-f28eb3cdbe16\") " Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.485772 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts" (OuterVolumeSpecName: "scripts") pod "441be7a9-ebbe-420e-896e-f28eb3cdbe16" (UID: "441be7a9-ebbe-420e-896e-f28eb3cdbe16"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.485805 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn" (OuterVolumeSpecName: "kube-api-access-8v4gn") pod "441be7a9-ebbe-420e-896e-f28eb3cdbe16" (UID: "441be7a9-ebbe-420e-896e-f28eb3cdbe16"). InnerVolumeSpecName "kube-api-access-8v4gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.500923 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "441be7a9-ebbe-420e-896e-f28eb3cdbe16" (UID: "441be7a9-ebbe-420e-896e-f28eb3cdbe16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.502955 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data" (OuterVolumeSpecName: "config-data") pod "441be7a9-ebbe-420e-896e-f28eb3cdbe16" (UID: "441be7a9-ebbe-420e-896e-f28eb3cdbe16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.574877 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.574915 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v4gn\" (UniqueName: \"kubernetes.io/projected/441be7a9-ebbe-420e-896e-f28eb3cdbe16-kube-api-access-8v4gn\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.574928 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.574937 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441be7a9-ebbe-420e-896e-f28eb3cdbe16-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.990817 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" event={"ID":"441be7a9-ebbe-420e-896e-f28eb3cdbe16","Type":"ContainerDied","Data":"eac5a1379e40f3182df3f97cfcef4c102e5dd6b206eaa9206d26d2c8cc228bf8"} Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.990859 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac5a1379e40f3182df3f97cfcef4c102e5dd6b206eaa9206d26d2c8cc228bf8" Jan 26 09:26:45 crc kubenswrapper[4827]: I0126 09:26:45.990905 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2qr8h" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.104411 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 09:26:46 crc kubenswrapper[4827]: E0126 09:26:46.104885 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441be7a9-ebbe-420e-896e-f28eb3cdbe16" containerName="nova-cell0-conductor-db-sync" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.104904 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="441be7a9-ebbe-420e-896e-f28eb3cdbe16" containerName="nova-cell0-conductor-db-sync" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.105131 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="441be7a9-ebbe-420e-896e-f28eb3cdbe16" containerName="nova-cell0-conductor-db-sync" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.105759 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.109788 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vzzrt" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.110111 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.113328 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.287653 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbc6j\" (UniqueName: \"kubernetes.io/projected/ef7d553f-5037-4ed5-9d99-c278f206381e-kube-api-access-fbc6j\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc 
kubenswrapper[4827]: I0126 09:26:46.287987 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.288106 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.389959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbc6j\" (UniqueName: \"kubernetes.io/projected/ef7d553f-5037-4ed5-9d99-c278f206381e-kube-api-access-fbc6j\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.390039 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.390092 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.396406 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.407599 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbc6j\" (UniqueName: \"kubernetes.io/projected/ef7d553f-5037-4ed5-9d99-c278f206381e-kube-api-access-fbc6j\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.407821 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7d553f-5037-4ed5-9d99-c278f206381e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef7d553f-5037-4ed5-9d99-c278f206381e\") " pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.475056 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:46 crc kubenswrapper[4827]: I0126 09:26:46.904600 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 09:26:47 crc kubenswrapper[4827]: I0126 09:26:46.999393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef7d553f-5037-4ed5-9d99-c278f206381e","Type":"ContainerStarted","Data":"ff243e19e3aa5e905110cea654c4c87277214da067a61f6ed8ee6fdbe89f11a4"} Jan 26 09:26:48 crc kubenswrapper[4827]: I0126 09:26:48.009198 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef7d553f-5037-4ed5-9d99-c278f206381e","Type":"ContainerStarted","Data":"34f0baa4d66605c52b101a985e5b41060c5f3444c4b0701bdbd9878eda592574"} Jan 26 09:26:48 crc kubenswrapper[4827]: I0126 09:26:48.010664 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:48 crc kubenswrapper[4827]: I0126 09:26:48.044012 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.043987056 podStartE2EDuration="2.043987056s" podCreationTimestamp="2026-01-26 09:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:26:48.041759006 +0000 UTC m=+1236.690430835" watchObservedRunningTime="2026-01-26 09:26:48.043987056 +0000 UTC m=+1236.692658885" Jan 26 09:26:56 crc kubenswrapper[4827]: I0126 09:26:56.506619 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 09:26:56 crc kubenswrapper[4827]: I0126 09:26:56.990091 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6sqfn"] Jan 26 09:26:56 crc kubenswrapper[4827]: I0126 09:26:56.992222 4827 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:56 crc kubenswrapper[4827]: I0126 09:26:56.996173 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 09:26:56 crc kubenswrapper[4827]: I0126 09:26:56.996368 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.004529 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6sqfn"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.089091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxn6\" (UniqueName: \"kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.089138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.089201 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.089243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.192592 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxn6\" (UniqueName: \"kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.192680 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.192739 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.192778 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.192609 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 
09:26:57.195030 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.201482 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.206365 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.207365 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.214387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.214702 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.219599 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.243015 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.248203 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxn6\" (UniqueName: \"kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6\") pod \"nova-cell0-cell-mapping-6sqfn\" 
(UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.255872 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6sqfn\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.300943 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301002 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301066 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301097 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301123 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301165 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wr6\" (UniqueName: \"kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.301182 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2prb\" (UniqueName: \"kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.325391 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.333844 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.413849 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wr6\" (UniqueName: \"kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.413887 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2prb\" (UniqueName: \"kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.413913 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.413946 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.414001 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.414030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.414056 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.421114 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.427833 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.429150 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.432062 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.436996 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.438945 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.445391 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.446366 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.482257 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2prb\" (UniqueName: \"kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb\") pod \"nova-api-0\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " 
pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.483783 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.502575 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wr6\" (UniqueName: \"kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.522183 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.522244 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.522285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb97p\" (UniqueName: \"kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.522394 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.525570 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.526671 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.531116 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.610213 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.630656 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631494 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631558 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631581 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc 
kubenswrapper[4827]: I0126 09:26:57.631620 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb97p\" (UniqueName: \"kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631659 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjgw\" (UniqueName: \"kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.631764 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.644848 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.648660 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.656809 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.657258 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.679290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb97p\" (UniqueName: \"kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p\") pod \"nova-metadata-0\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.734122 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.736981 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.737156 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjgw\" 
(UniqueName: \"kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.742968 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.749092 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.753053 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.754408 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.797461 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjgw\" (UniqueName: \"kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw\") pod \"nova-scheduler-0\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.812122 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.828001 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.849108 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.849177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d6hw\" (UniqueName: \"kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.849222 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.849245 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.849318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: 
\"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.851073 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.954528 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.954589 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d6hw\" (UniqueName: \"kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.954624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.954659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.954700 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.955571 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.955867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.956103 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.956523 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config\") pod \"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:57 crc kubenswrapper[4827]: I0126 09:26:57.985619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d6hw\" (UniqueName: \"kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw\") pod 
\"dnsmasq-dns-59fd54bbff-nxk49\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.106506 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.256100 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6sqfn"] Jan 26 09:26:58 crc kubenswrapper[4827]: W0126 09:26:58.295474 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3793491_d9a6_4c4a_ad5b_00818693d5fc.slice/crio-e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2 WatchSource:0}: Error finding container e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2: Status 404 returned error can't find the container with id e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2 Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.455965 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.640611 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:26:58 crc kubenswrapper[4827]: W0126 09:26:58.640979 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc0e008c_de8c_48df_a639_dae1a92166f1.slice/crio-8f46af1adfc52143ffcc9da80910a440eb73f8b7378ca06d9dacd5f37d136113 WatchSource:0}: Error finding container 8f46af1adfc52143ffcc9da80910a440eb73f8b7378ca06d9dacd5f37d136113: Status 404 returned error can't find the container with id 8f46af1adfc52143ffcc9da80910a440eb73f8b7378ca06d9dacd5f37d136113 Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.650391 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.772982 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wj8g6"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.774115 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.776380 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.777019 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.800721 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.837676 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wj8g6"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.884793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.885103 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59lm\" (UniqueName: \"kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.885289 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.885473 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.905307 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.987613 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.987987 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.988049 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59lm\" (UniqueName: \"kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm\") pod 
\"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.988126 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.992334 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.993276 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:58 crc kubenswrapper[4827]: I0126 09:26:58.993624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.010995 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59lm\" (UniqueName: \"kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm\") pod \"nova-cell1-conductor-db-sync-wj8g6\" (UID: 
\"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.090210 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.116612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7012d90e-6e98-4755-a4c8-0711f4167fb9","Type":"ContainerStarted","Data":"9385d09f69b28f2b80e0fa3e55bade7f471ce14347a6bd1a3151e378157f3341"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.117833 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerStarted","Data":"8f46af1adfc52143ffcc9da80910a440eb73f8b7378ca06d9dacd5f37d136113"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.120046 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aabf68e1-11ae-46df-b2b5-d8ca6005f427","Type":"ContainerStarted","Data":"603e2dc81668c80e41d5ee1195b9e1e7d978fbcbeebff8252c9113894d87f69e"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.122674 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerStarted","Data":"e74b6d2c347eae02f2e5bc470b3d40897bb4c62e9a68f436fb7609ab7e87112e"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.122711 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerStarted","Data":"68010e2c55b8c82641a98f1491db24d5c6dd398e3b214f87c43cb7bc2eb4c1a8"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.135222 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6sqfn" 
event={"ID":"c3793491-d9a6-4c4a-ad5b-00818693d5fc","Type":"ContainerStarted","Data":"21010fa8fb743688fcddf0dc21ba0f9179aff798e62e9dd41168e4396dfa9f16"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.135271 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6sqfn" event={"ID":"c3793491-d9a6-4c4a-ad5b-00818693d5fc","Type":"ContainerStarted","Data":"e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.163858 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerStarted","Data":"efa24ea516c4a109785b4b3da8eb9880d1bad6a1827679460e24c07ce50ec8e3"} Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.177530 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6sqfn" podStartSLOduration=3.177319756 podStartE2EDuration="3.177319756s" podCreationTimestamp="2026-01-26 09:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:26:59.166809704 +0000 UTC m=+1247.815481523" watchObservedRunningTime="2026-01-26 09:26:59.177319756 +0000 UTC m=+1247.825991585" Jan 26 09:26:59 crc kubenswrapper[4827]: I0126 09:26:59.619194 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wj8g6"] Jan 26 09:26:59 crc kubenswrapper[4827]: W0126 09:26:59.636730 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4762a9bd_9e83_4616_a70b_3c53f1d4147c.slice/crio-1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839 WatchSource:0}: Error finding container 1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839: Status 404 returned error can't find the container with id 
1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839 Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.181626 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" event={"ID":"4762a9bd-9e83-4616-a70b-3c53f1d4147c","Type":"ContainerStarted","Data":"67571f445348db8200d8412b7b40a1dc5be9649cec2a4e038740b7e409335df8"} Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.182443 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" event={"ID":"4762a9bd-9e83-4616-a70b-3c53f1d4147c","Type":"ContainerStarted","Data":"1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839"} Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.187240 4827 generic.go:334] "Generic (PLEG): container finished" podID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerID="e74b6d2c347eae02f2e5bc470b3d40897bb4c62e9a68f436fb7609ab7e87112e" exitCode=0 Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.187322 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerDied","Data":"e74b6d2c347eae02f2e5bc470b3d40897bb4c62e9a68f436fb7609ab7e87112e"} Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.187347 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerStarted","Data":"87bd9c50a25fcebb45a76249b643413fa576917391bb4420549f676614c42251"} Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.187422 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.209995 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" podStartSLOduration=2.209909634 
podStartE2EDuration="2.209909634s" podCreationTimestamp="2026-01-26 09:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:00.197751589 +0000 UTC m=+1248.846423408" watchObservedRunningTime="2026-01-26 09:27:00.209909634 +0000 UTC m=+1248.858581453" Jan 26 09:27:00 crc kubenswrapper[4827]: I0126 09:27:00.219764 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" podStartSLOduration=3.219742868 podStartE2EDuration="3.219742868s" podCreationTimestamp="2026-01-26 09:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:00.218876015 +0000 UTC m=+1248.867547834" watchObservedRunningTime="2026-01-26 09:27:00.219742868 +0000 UTC m=+1248.868414687" Jan 26 09:27:01 crc kubenswrapper[4827]: I0126 09:27:01.176483 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:01 crc kubenswrapper[4827]: I0126 09:27:01.191505 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.260106 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerStarted","Data":"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.260686 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerStarted","Data":"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.265491 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"7012d90e-6e98-4755-a4c8-0711f4167fb9","Type":"ContainerStarted","Data":"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.265560 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7012d90e-6e98-4755-a4c8-0711f4167fb9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651" gracePeriod=30 Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.270009 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerStarted","Data":"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.270193 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerStarted","Data":"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.270120 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-metadata" containerID="cri-o://c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" gracePeriod=30 Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.270087 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-log" containerID="cri-o://917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" gracePeriod=30 Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.273319 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"aabf68e1-11ae-46df-b2b5-d8ca6005f427","Type":"ContainerStarted","Data":"17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea"} Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.315104 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.978407776 podStartE2EDuration="7.315083547s" podCreationTimestamp="2026-01-26 09:26:57 +0000 UTC" firstStartedPulling="2026-01-26 09:26:58.645175447 +0000 UTC m=+1247.293847266" lastFinishedPulling="2026-01-26 09:27:02.981851218 +0000 UTC m=+1251.630523037" observedRunningTime="2026-01-26 09:27:04.309899408 +0000 UTC m=+1252.958571237" watchObservedRunningTime="2026-01-26 09:27:04.315083547 +0000 UTC m=+1252.963755366" Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.315989 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.817920468 podStartE2EDuration="7.315979591s" podCreationTimestamp="2026-01-26 09:26:57 +0000 UTC" firstStartedPulling="2026-01-26 09:26:58.483842886 +0000 UTC m=+1247.132514705" lastFinishedPulling="2026-01-26 09:27:02.981902009 +0000 UTC m=+1251.630573828" observedRunningTime="2026-01-26 09:27:04.287498948 +0000 UTC m=+1252.936170787" watchObservedRunningTime="2026-01-26 09:27:04.315979591 +0000 UTC m=+1252.964651410" Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.345921 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.017099992 podStartE2EDuration="7.345903532s" podCreationTimestamp="2026-01-26 09:26:57 +0000 UTC" firstStartedPulling="2026-01-26 09:26:58.651421094 +0000 UTC m=+1247.300092913" lastFinishedPulling="2026-01-26 09:27:02.980224634 +0000 UTC m=+1251.628896453" observedRunningTime="2026-01-26 09:27:04.344147985 +0000 UTC m=+1252.992819804" watchObservedRunningTime="2026-01-26 09:27:04.345903532 +0000 UTC m=+1252.994575351" Jan 
26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.373462 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.222468581 podStartE2EDuration="7.37344692s" podCreationTimestamp="2026-01-26 09:26:57 +0000 UTC" firstStartedPulling="2026-01-26 09:26:58.832802821 +0000 UTC m=+1247.481474640" lastFinishedPulling="2026-01-26 09:27:02.98378114 +0000 UTC m=+1251.632452979" observedRunningTime="2026-01-26 09:27:04.372954186 +0000 UTC m=+1253.021625995" watchObservedRunningTime="2026-01-26 09:27:04.37344692 +0000 UTC m=+1253.022118739" Jan 26 09:27:04 crc kubenswrapper[4827]: I0126 09:27:04.896147 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.020554 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle\") pod \"dc0e008c-de8c-48df-a639-dae1a92166f1\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.020663 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data\") pod \"dc0e008c-de8c-48df-a639-dae1a92166f1\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.020708 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb97p\" (UniqueName: \"kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p\") pod \"dc0e008c-de8c-48df-a639-dae1a92166f1\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.020918 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs\") pod \"dc0e008c-de8c-48df-a639-dae1a92166f1\" (UID: \"dc0e008c-de8c-48df-a639-dae1a92166f1\") " Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.021402 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs" (OuterVolumeSpecName: "logs") pod "dc0e008c-de8c-48df-a639-dae1a92166f1" (UID: "dc0e008c-de8c-48df-a639-dae1a92166f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.043522 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p" (OuterVolumeSpecName: "kube-api-access-qb97p") pod "dc0e008c-de8c-48df-a639-dae1a92166f1" (UID: "dc0e008c-de8c-48df-a639-dae1a92166f1"). InnerVolumeSpecName "kube-api-access-qb97p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.051484 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc0e008c-de8c-48df-a639-dae1a92166f1" (UID: "dc0e008c-de8c-48df-a639-dae1a92166f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.096185 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data" (OuterVolumeSpecName: "config-data") pod "dc0e008c-de8c-48df-a639-dae1a92166f1" (UID: "dc0e008c-de8c-48df-a639-dae1a92166f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.123627 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e008c-de8c-48df-a639-dae1a92166f1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.123719 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.123738 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e008c-de8c-48df-a639-dae1a92166f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.123750 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb97p\" (UniqueName: \"kubernetes.io/projected/dc0e008c-de8c-48df-a639-dae1a92166f1-kube-api-access-qb97p\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.285005 4827 generic.go:334] "Generic (PLEG): container finished" podID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerID="c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" exitCode=0 Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.285034 4827 generic.go:334] "Generic (PLEG): container finished" podID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerID="917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" exitCode=143 Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.286074 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.286201 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerDied","Data":"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46"} Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.286246 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerDied","Data":"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0"} Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.286257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc0e008c-de8c-48df-a639-dae1a92166f1","Type":"ContainerDied","Data":"8f46af1adfc52143ffcc9da80910a440eb73f8b7378ca06d9dacd5f37d136113"} Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.286272 4827 scope.go:117] "RemoveContainer" containerID="c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.316488 4827 scope.go:117] "RemoveContainer" containerID="917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.323070 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.336158 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.357584 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:05 crc kubenswrapper[4827]: E0126 09:27:05.358040 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-metadata" Jan 26 09:27:05 crc 
kubenswrapper[4827]: I0126 09:27:05.358062 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-metadata" Jan 26 09:27:05 crc kubenswrapper[4827]: E0126 09:27:05.358093 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-log" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.358101 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-log" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.358262 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-metadata" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.358277 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" containerName="nova-metadata-log" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.359397 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.362699 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.362779 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.373995 4827 scope.go:117] "RemoveContainer" containerID="c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" Jan 26 09:27:05 crc kubenswrapper[4827]: E0126 09:27:05.374364 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46\": container with ID starting with c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46 not found: ID does not exist" containerID="c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.374386 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46"} err="failed to get container status \"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46\": rpc error: code = NotFound desc = could not find container \"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46\": container with ID starting with c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46 not found: ID does not exist" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.374406 4827 scope.go:117] "RemoveContainer" containerID="917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" Jan 26 09:27:05 crc kubenswrapper[4827]: E0126 09:27:05.375117 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0\": container with ID starting with 917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0 not found: ID does not exist" containerID="917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.375153 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0"} err="failed to get container status \"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0\": rpc error: code = NotFound desc = could not find container \"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0\": container with ID starting with 917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0 not found: ID does not exist" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.375178 4827 scope.go:117] "RemoveContainer" containerID="c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.375430 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46"} err="failed to get container status \"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46\": rpc error: code = NotFound desc = could not find container \"c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46\": container with ID starting with c66564a1abebb549c3795e2f5e798739aaf3b2a61a1e37ae2110284d8c3d5d46 not found: ID does not exist" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.375445 4827 scope.go:117] "RemoveContainer" containerID="917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.375616 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0"} err="failed to get container status \"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0\": rpc error: code = NotFound desc = could not find container \"917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0\": container with ID starting with 917a79988f2970130fc83417bc5e0798672942601ac66e32dc6cbcf4950514e0 not found: ID does not exist" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.383418 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.429191 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.429268 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrmp\" (UniqueName: \"kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.429299 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.429380 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.429409 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.531136 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.531441 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.531535 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.531586 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrmp\" (UniqueName: \"kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " 
pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.531625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.532206 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.535432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.536079 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.536487 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.570110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrmp\" (UniqueName: 
\"kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp\") pod \"nova-metadata-0\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.697889 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:05 crc kubenswrapper[4827]: I0126 09:27:05.719535 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0e008c-de8c-48df-a639-dae1a92166f1" path="/var/lib/kubelet/pods/dc0e008c-de8c-48df-a639-dae1a92166f1/volumes" Jan 26 09:27:06 crc kubenswrapper[4827]: I0126 09:27:06.160147 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:06 crc kubenswrapper[4827]: I0126 09:27:06.294763 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerStarted","Data":"a7c66a038d6fd44bdce1681494b7417a9839a354e5bb321125a620d0c501d6b1"} Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.309447 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerStarted","Data":"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b"} Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.309806 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerStarted","Data":"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46"} Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.316161 4827 generic.go:334] "Generic (PLEG): container finished" podID="c3793491-d9a6-4c4a-ad5b-00818693d5fc" containerID="21010fa8fb743688fcddf0dc21ba0f9179aff798e62e9dd41168e4396dfa9f16" exitCode=0 Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.316214 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6sqfn" event={"ID":"c3793491-d9a6-4c4a-ad5b-00818693d5fc","Type":"ContainerDied","Data":"21010fa8fb743688fcddf0dc21ba0f9179aff798e62e9dd41168e4396dfa9f16"} Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.340555 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.340534388 podStartE2EDuration="2.340534388s" podCreationTimestamp="2026-01-26 09:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:07.327858888 +0000 UTC m=+1255.976530707" watchObservedRunningTime="2026-01-26 09:27:07.340534388 +0000 UTC m=+1255.989206207" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.636572 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.636626 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.657942 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.852297 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.852703 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 09:27:07 crc kubenswrapper[4827]: I0126 09:27:07.880953 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.109482 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 
26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.178834 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.179067 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="dnsmasq-dns" containerID="cri-o://a8b763d25114e9ecce43025a46304953346ea5f95ce4ab3b5465fd8260fa1f7c" gracePeriod=10 Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.331810 4827 generic.go:334] "Generic (PLEG): container finished" podID="4762a9bd-9e83-4616-a70b-3c53f1d4147c" containerID="67571f445348db8200d8412b7b40a1dc5be9649cec2a4e038740b7e409335df8" exitCode=0 Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.331863 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" event={"ID":"4762a9bd-9e83-4616-a70b-3c53f1d4147c","Type":"ContainerDied","Data":"67571f445348db8200d8412b7b40a1dc5be9649cec2a4e038740b7e409335df8"} Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.337620 4827 generic.go:334] "Generic (PLEG): container finished" podID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerID="a8b763d25114e9ecce43025a46304953346ea5f95ce4ab3b5465fd8260fa1f7c" exitCode=0 Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.338469 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" event={"ID":"562ed53d-de3c-4e5b-9385-05d2564d587a","Type":"ContainerDied","Data":"a8b763d25114e9ecce43025a46304953346ea5f95ce4ab3b5465fd8260fa1f7c"} Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.387952 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.475276 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" 
Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.597973 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.679964 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.721315 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.964219 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:27:08 crc kubenswrapper[4827]: I0126 09:27:08.970693 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099355 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data\") pod \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099412 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config\") pod \"562ed53d-de3c-4e5b-9385-05d2564d587a\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099443 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb\") pod \"562ed53d-de3c-4e5b-9385-05d2564d587a\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxn6\" (UniqueName: \"kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6\") pod \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099539 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb\") pod \"562ed53d-de3c-4e5b-9385-05d2564d587a\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099592 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle\") pod \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099669 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts\") pod \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\" (UID: \"c3793491-d9a6-4c4a-ad5b-00818693d5fc\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099775 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc\") pod \"562ed53d-de3c-4e5b-9385-05d2564d587a\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.099865 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntg72\" (UniqueName: \"kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72\") pod \"562ed53d-de3c-4e5b-9385-05d2564d587a\" (UID: \"562ed53d-de3c-4e5b-9385-05d2564d587a\") " Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.115580 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6" (OuterVolumeSpecName: "kube-api-access-wwxn6") pod "c3793491-d9a6-4c4a-ad5b-00818693d5fc" (UID: "c3793491-d9a6-4c4a-ad5b-00818693d5fc"). InnerVolumeSpecName "kube-api-access-wwxn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.115922 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72" (OuterVolumeSpecName: "kube-api-access-ntg72") pod "562ed53d-de3c-4e5b-9385-05d2564d587a" (UID: "562ed53d-de3c-4e5b-9385-05d2564d587a"). InnerVolumeSpecName "kube-api-access-ntg72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.138873 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts" (OuterVolumeSpecName: "scripts") pod "c3793491-d9a6-4c4a-ad5b-00818693d5fc" (UID: "c3793491-d9a6-4c4a-ad5b-00818693d5fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.145726 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data" (OuterVolumeSpecName: "config-data") pod "c3793491-d9a6-4c4a-ad5b-00818693d5fc" (UID: "c3793491-d9a6-4c4a-ad5b-00818693d5fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.150082 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3793491-d9a6-4c4a-ad5b-00818693d5fc" (UID: "c3793491-d9a6-4c4a-ad5b-00818693d5fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.170934 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "562ed53d-de3c-4e5b-9385-05d2564d587a" (UID: "562ed53d-de3c-4e5b-9385-05d2564d587a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.191245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "562ed53d-de3c-4e5b-9385-05d2564d587a" (UID: "562ed53d-de3c-4e5b-9385-05d2564d587a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.191904 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config" (OuterVolumeSpecName: "config") pod "562ed53d-de3c-4e5b-9385-05d2564d587a" (UID: "562ed53d-de3c-4e5b-9385-05d2564d587a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.198188 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "562ed53d-de3c-4e5b-9385-05d2564d587a" (UID: "562ed53d-de3c-4e5b-9385-05d2564d587a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204575 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204652 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204669 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204688 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxn6\" (UniqueName: \"kubernetes.io/projected/c3793491-d9a6-4c4a-ad5b-00818693d5fc-kube-api-access-wwxn6\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204700 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204714 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204727 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3793491-d9a6-4c4a-ad5b-00818693d5fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204739 4827 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562ed53d-de3c-4e5b-9385-05d2564d587a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.204750 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntg72\" (UniqueName: \"kubernetes.io/projected/562ed53d-de3c-4e5b-9385-05d2564d587a-kube-api-access-ntg72\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.348559 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" event={"ID":"562ed53d-de3c-4e5b-9385-05d2564d587a","Type":"ContainerDied","Data":"cca2c58227e2c8872e1bca83239769d6fe4636e0af6867083d7382a12d98e8fc"} Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.348582 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-4fhpr" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.348620 4827 scope.go:117] "RemoveContainer" containerID="a8b763d25114e9ecce43025a46304953346ea5f95ce4ab3b5465fd8260fa1f7c" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.383310 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.394514 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6sqfn" event={"ID":"c3793491-d9a6-4c4a-ad5b-00818693d5fc","Type":"ContainerDied","Data":"e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2"} Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.394580 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06753b02f2ae26bb8937d478a2d4d8997de8c97dca13e3eb79eb794e0f0dce2" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.394788 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6sqfn" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.402587 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-4fhpr"] Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.443979 4827 scope.go:117] "RemoveContainer" containerID="d4ebfbbbeab0d3fb68564e3184559fe52d26888027c7ba90986c3238ed2f287b" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.584822 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.585232 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-log" containerID="cri-o://591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022" gracePeriod=30 Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.585588 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-api" containerID="cri-o://8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45" gracePeriod=30 Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.622409 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.622656 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-log" containerID="cri-o://a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46" gracePeriod=30 Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.623165 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" 
containerName="nova-metadata-metadata" containerID="cri-o://691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b" gracePeriod=30 Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.715515 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" path="/var/lib/kubelet/pods/562ed53d-de3c-4e5b-9385-05d2564d587a/volumes" Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.777069 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:09 crc kubenswrapper[4827]: I0126 09:27:09.913862 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.027451 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data\") pod \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.027499 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59lm\" (UniqueName: \"kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm\") pod \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.027650 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle\") pod \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.027755 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts\") pod \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\" (UID: \"4762a9bd-9e83-4616-a70b-3c53f1d4147c\") " Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.034781 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts" (OuterVolumeSpecName: "scripts") pod "4762a9bd-9e83-4616-a70b-3c53f1d4147c" (UID: "4762a9bd-9e83-4616-a70b-3c53f1d4147c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.034930 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm" (OuterVolumeSpecName: "kube-api-access-z59lm") pod "4762a9bd-9e83-4616-a70b-3c53f1d4147c" (UID: "4762a9bd-9e83-4616-a70b-3c53f1d4147c"). InnerVolumeSpecName "kube-api-access-z59lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.061927 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4762a9bd-9e83-4616-a70b-3c53f1d4147c" (UID: "4762a9bd-9e83-4616-a70b-3c53f1d4147c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.078439 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data" (OuterVolumeSpecName: "config-data") pod "4762a9bd-9e83-4616-a70b-3c53f1d4147c" (UID: "4762a9bd-9e83-4616-a70b-3c53f1d4147c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.130372 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.130403 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.130414 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59lm\" (UniqueName: \"kubernetes.io/projected/4762a9bd-9e83-4616-a70b-3c53f1d4147c-kube-api-access-z59lm\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.130424 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4762a9bd-9e83-4616-a70b-3c53f1d4147c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.402709 4827 generic.go:334] "Generic (PLEG): container finished" podID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerID="a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46" exitCode=143 Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.402748 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerDied","Data":"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46"} Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.414727 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" event={"ID":"4762a9bd-9e83-4616-a70b-3c53f1d4147c","Type":"ContainerDied","Data":"1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839"} Jan 26 
09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.414782 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac2a25c090918ad2fd0bd9fd76214aae38a3c1f531d898d581b363d0f46f839" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.414745 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wj8g6" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.444876 4827 generic.go:334] "Generic (PLEG): container finished" podID="11934595-5099-4b13-b713-8b041bf2f130" containerID="591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022" exitCode=143 Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.445700 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerDied","Data":"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022"} Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.463598 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 09:27:10 crc kubenswrapper[4827]: E0126 09:27:10.463971 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3793491-d9a6-4c4a-ad5b-00818693d5fc" containerName="nova-manage" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.463985 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3793491-d9a6-4c4a-ad5b-00818693d5fc" containerName="nova-manage" Jan 26 09:27:10 crc kubenswrapper[4827]: E0126 09:27:10.463998 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="dnsmasq-dns" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464005 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="dnsmasq-dns" Jan 26 09:27:10 crc kubenswrapper[4827]: E0126 09:27:10.464016 4827 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="init" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464022 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="init" Jan 26 09:27:10 crc kubenswrapper[4827]: E0126 09:27:10.464045 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4762a9bd-9e83-4616-a70b-3c53f1d4147c" containerName="nova-cell1-conductor-db-sync" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464052 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4762a9bd-9e83-4616-a70b-3c53f1d4147c" containerName="nova-cell1-conductor-db-sync" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464307 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3793491-d9a6-4c4a-ad5b-00818693d5fc" containerName="nova-manage" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464319 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="562ed53d-de3c-4e5b-9385-05d2564d587a" containerName="dnsmasq-dns" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464338 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4762a9bd-9e83-4616-a70b-3c53f1d4147c" containerName="nova-cell1-conductor-db-sync" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.464895 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.468033 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.491855 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.539831 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndl5s\" (UniqueName: \"kubernetes.io/projected/a7163549-12a8-403d-b952-a03566f40771-kube-api-access-ndl5s\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.540760 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.541005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.642652 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc 
kubenswrapper[4827]: I0126 09:27:10.642760 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndl5s\" (UniqueName: \"kubernetes.io/projected/a7163549-12a8-403d-b952-a03566f40771-kube-api-access-ndl5s\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.642835 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.646302 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.656856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7163549-12a8-403d-b952-a03566f40771-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.688768 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndl5s\" (UniqueName: \"kubernetes.io/projected/a7163549-12a8-403d-b952-a03566f40771-kube-api-access-ndl5s\") pod \"nova-cell1-conductor-0\" (UID: \"a7163549-12a8-403d-b952-a03566f40771\") " pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.698301 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.698352 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 09:27:10 crc kubenswrapper[4827]: I0126 09:27:10.837161 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:11 crc kubenswrapper[4827]: I0126 09:27:11.349181 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 09:27:11 crc kubenswrapper[4827]: W0126 09:27:11.357920 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7163549_12a8_403d_b952_a03566f40771.slice/crio-4a2a4307c0e380b9a08c4386910db49c25315e744e5e0345c7c3ac0bbcb3557b WatchSource:0}: Error finding container 4a2a4307c0e380b9a08c4386910db49c25315e744e5e0345c7c3ac0bbcb3557b: Status 404 returned error can't find the container with id 4a2a4307c0e380b9a08c4386910db49c25315e744e5e0345c7c3ac0bbcb3557b Jan 26 09:27:11 crc kubenswrapper[4827]: I0126 09:27:11.458027 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a7163549-12a8-403d-b952-a03566f40771","Type":"ContainerStarted","Data":"4a2a4307c0e380b9a08c4386910db49c25315e744e5e0345c7c3ac0bbcb3557b"} Jan 26 09:27:11 crc kubenswrapper[4827]: I0126 09:27:11.459263 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerName="nova-scheduler-scheduler" containerID="cri-o://17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" gracePeriod=30 Jan 26 09:27:11 crc kubenswrapper[4827]: E0126 09:27:11.784961 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.268548 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.403975 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data\") pod \"52dc3b7b-13c0-4e66-abc8-b450be207a11\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.404067 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcrmp\" (UniqueName: \"kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp\") pod \"52dc3b7b-13c0-4e66-abc8-b450be207a11\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.404279 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle\") pod \"52dc3b7b-13c0-4e66-abc8-b450be207a11\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.404351 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs\") pod \"52dc3b7b-13c0-4e66-abc8-b450be207a11\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.404386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs\") pod \"52dc3b7b-13c0-4e66-abc8-b450be207a11\" (UID: \"52dc3b7b-13c0-4e66-abc8-b450be207a11\") " Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.406395 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs" (OuterVolumeSpecName: "logs") pod "52dc3b7b-13c0-4e66-abc8-b450be207a11" (UID: "52dc3b7b-13c0-4e66-abc8-b450be207a11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.411978 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp" (OuterVolumeSpecName: "kube-api-access-tcrmp") pod "52dc3b7b-13c0-4e66-abc8-b450be207a11" (UID: "52dc3b7b-13c0-4e66-abc8-b450be207a11"). InnerVolumeSpecName "kube-api-access-tcrmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.444115 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52dc3b7b-13c0-4e66-abc8-b450be207a11" (UID: "52dc3b7b-13c0-4e66-abc8-b450be207a11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.445237 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data" (OuterVolumeSpecName: "config-data") pod "52dc3b7b-13c0-4e66-abc8-b450be207a11" (UID: "52dc3b7b-13c0-4e66-abc8-b450be207a11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.471129 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "52dc3b7b-13c0-4e66-abc8-b450be207a11" (UID: "52dc3b7b-13c0-4e66-abc8-b450be207a11"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.474272 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.475666 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerDied","Data":"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b"} Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.475754 4827 scope.go:117] "RemoveContainer" containerID="691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.474162 4827 generic.go:334] "Generic (PLEG): container finished" podID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerID="691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b" exitCode=0 Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.477883 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52dc3b7b-13c0-4e66-abc8-b450be207a11","Type":"ContainerDied","Data":"a7c66a038d6fd44bdce1681494b7417a9839a354e5bb321125a620d0c501d6b1"} Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.483831 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"a7163549-12a8-403d-b952-a03566f40771","Type":"ContainerStarted","Data":"40d2b090bfbc522f1e64efbd7e5cfd1da90cc09544cf99143a135864a865fbdb"} Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.484571 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.506568 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcrmp\" (UniqueName: \"kubernetes.io/projected/52dc3b7b-13c0-4e66-abc8-b450be207a11-kube-api-access-tcrmp\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.506594 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.506603 4827 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.506610 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc3b7b-13c0-4e66-abc8-b450be207a11-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.506619 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc3b7b-13c0-4e66-abc8-b450be207a11-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.515502 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.515488335 podStartE2EDuration="2.515488335s" podCreationTimestamp="2026-01-26 09:27:10 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:12.505487588 +0000 UTC m=+1261.154159407" watchObservedRunningTime="2026-01-26 09:27:12.515488335 +0000 UTC m=+1261.164160154" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.562838 4827 scope.go:117] "RemoveContainer" containerID="a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.573947 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.583323 4827 scope.go:117] "RemoveContainer" containerID="691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b" Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.584584 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b\": container with ID starting with 691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b not found: ID does not exist" containerID="691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.584622 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b"} err="failed to get container status \"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b\": rpc error: code = NotFound desc = could not find container \"691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b\": container with ID starting with 691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b not found: ID does not exist" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.584673 4827 scope.go:117] "RemoveContainer" containerID="a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46" Jan 26 09:27:12 
crc kubenswrapper[4827]: I0126 09:27:12.584760 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.585268 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46\": container with ID starting with a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46 not found: ID does not exist" containerID="a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.585299 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46"} err="failed to get container status \"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46\": rpc error: code = NotFound desc = could not find container \"a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46\": container with ID starting with a2b28161ef124fadd68587987bf21d10733ef3b9f3926151537c315696590c46 not found: ID does not exist" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.596252 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.596809 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-log" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.596840 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-log" Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.596862 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-metadata" Jan 26 09:27:12 crc kubenswrapper[4827]: 
I0126 09:27:12.596870 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-metadata" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.597144 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-metadata" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.597170 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" containerName="nova-metadata-log" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.598264 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.602767 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.602869 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.654977 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.709831 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.709901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc4z\" (UniqueName: \"kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" 
Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.709927 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.709957 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.710012 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.811535 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.811622 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfc4z\" (UniqueName: \"kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.811667 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.811706 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.811769 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.814134 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.824597 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.825527 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data\") pod \"nova-metadata-0\" (UID: 
\"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.830013 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.833159 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfc4z\" (UniqueName: \"kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z\") pod \"nova-metadata-0\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " pod="openstack/nova-metadata-0" Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.855934 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.857799 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 09:27:12 crc kubenswrapper[4827]: E0126 09:27:12.860438 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 09:27:12 
crc kubenswrapper[4827]: E0126 09:27:12.860597 4827 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerName="nova-scheduler-scheduler" Jan 26 09:27:12 crc kubenswrapper[4827]: I0126 09:27:12.926286 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:13 crc kubenswrapper[4827]: I0126 09:27:13.369432 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:13 crc kubenswrapper[4827]: I0126 09:27:13.494705 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerStarted","Data":"003791cd1ca7cbf16a93926f1ee0de55b6bb0b69941ee419be47a1896f419263"} Jan 26 09:27:13 crc kubenswrapper[4827]: I0126 09:27:13.715267 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dc3b7b-13c0-4e66-abc8-b450be207a11" path="/var/lib/kubelet/pods/52dc3b7b-13c0-4e66-abc8-b450be207a11/volumes" Jan 26 09:27:14 crc kubenswrapper[4827]: I0126 09:27:14.505544 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerStarted","Data":"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc"} Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.519136 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerStarted","Data":"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d"} Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.524013 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerID="17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" exitCode=0 Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.524114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aabf68e1-11ae-46df-b2b5-d8ca6005f427","Type":"ContainerDied","Data":"17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea"} Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.525974 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.526682 4827 generic.go:334] "Generic (PLEG): container finished" podID="11934595-5099-4b13-b713-8b041bf2f130" containerID="8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45" exitCode=0 Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.526731 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerDied","Data":"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45"} Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.526804 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11934595-5099-4b13-b713-8b041bf2f130","Type":"ContainerDied","Data":"efa24ea516c4a109785b4b3da8eb9880d1bad6a1827679460e24c07ce50ec8e3"} Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.526841 4827 scope.go:117] "RemoveContainer" containerID="8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.556834 4827 scope.go:117] "RemoveContainer" containerID="591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.560399 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.560377847 podStartE2EDuration="3.560377847s" podCreationTimestamp="2026-01-26 09:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:15.543198367 +0000 UTC m=+1264.191870186" watchObservedRunningTime="2026-01-26 09:27:15.560377847 +0000 UTC m=+1264.209049676" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.585010 4827 scope.go:117] "RemoveContainer" containerID="8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45" Jan 26 09:27:15 crc kubenswrapper[4827]: E0126 09:27:15.594153 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45\": container with ID starting with 8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45 not found: ID does not exist" containerID="8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.594315 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45"} err="failed to get container status \"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45\": rpc error: code = NotFound desc = could not find container \"8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45\": container with ID starting with 8135020747e01efde1a945a4add4aeb13300f150664808a399c3921018351c45 not found: ID does not exist" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.594421 4827 scope.go:117] "RemoveContainer" containerID="591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022" Jan 26 09:27:15 crc kubenswrapper[4827]: E0126 09:27:15.594878 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022\": container with ID starting with 591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022 not found: ID does not exist" containerID="591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.594921 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022"} err="failed to get container status \"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022\": rpc error: code = NotFound desc = could not find container \"591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022\": container with ID starting with 591a76dcad791c1237c4d6bb9f8f563bffcd20178ef9d5707394f4f6a6687022 not found: ID does not exist" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.670983 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle\") pod \"11934595-5099-4b13-b713-8b041bf2f130\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.671404 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data\") pod \"11934595-5099-4b13-b713-8b041bf2f130\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.671612 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2prb\" (UniqueName: \"kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb\") pod \"11934595-5099-4b13-b713-8b041bf2f130\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 
09:27:15.671823 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs\") pod \"11934595-5099-4b13-b713-8b041bf2f130\" (UID: \"11934595-5099-4b13-b713-8b041bf2f130\") " Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.672615 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs" (OuterVolumeSpecName: "logs") pod "11934595-5099-4b13-b713-8b041bf2f130" (UID: "11934595-5099-4b13-b713-8b041bf2f130"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.673949 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11934595-5099-4b13-b713-8b041bf2f130-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.676437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb" (OuterVolumeSpecName: "kube-api-access-f2prb") pod "11934595-5099-4b13-b713-8b041bf2f130" (UID: "11934595-5099-4b13-b713-8b041bf2f130"). InnerVolumeSpecName "kube-api-access-f2prb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.703515 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11934595-5099-4b13-b713-8b041bf2f130" (UID: "11934595-5099-4b13-b713-8b041bf2f130"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.716932 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data" (OuterVolumeSpecName: "config-data") pod "11934595-5099-4b13-b713-8b041bf2f130" (UID: "11934595-5099-4b13-b713-8b041bf2f130"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.778022 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.778194 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11934595-5099-4b13-b713-8b041bf2f130-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:15 crc kubenswrapper[4827]: I0126 09:27:15.778206 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2prb\" (UniqueName: \"kubernetes.io/projected/11934595-5099-4b13-b713-8b041bf2f130-kube-api-access-f2prb\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.347508 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.494092 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tjgw\" (UniqueName: \"kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw\") pod \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.494180 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle\") pod \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.494274 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data\") pod \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\" (UID: \"aabf68e1-11ae-46df-b2b5-d8ca6005f427\") " Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.512806 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw" (OuterVolumeSpecName: "kube-api-access-2tjgw") pod "aabf68e1-11ae-46df-b2b5-d8ca6005f427" (UID: "aabf68e1-11ae-46df-b2b5-d8ca6005f427"). InnerVolumeSpecName "kube-api-access-2tjgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.543626 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.569347 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.569410 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aabf68e1-11ae-46df-b2b5-d8ca6005f427","Type":"ContainerDied","Data":"603e2dc81668c80e41d5ee1195b9e1e7d978fbcbeebff8252c9113894d87f69e"} Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.569465 4827 scope.go:117] "RemoveContainer" containerID="17b09a11c39be5ff8fe23c65140e580c40f5d17794fa04742015bf64a58cd7ea" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.584833 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabf68e1-11ae-46df-b2b5-d8ca6005f427" (UID: "aabf68e1-11ae-46df-b2b5-d8ca6005f427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.587939 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data" (OuterVolumeSpecName: "config-data") pod "aabf68e1-11ae-46df-b2b5-d8ca6005f427" (UID: "aabf68e1-11ae-46df-b2b5-d8ca6005f427"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.605161 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tjgw\" (UniqueName: \"kubernetes.io/projected/aabf68e1-11ae-46df-b2b5-d8ca6005f427-kube-api-access-2tjgw\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.605198 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.605215 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabf68e1-11ae-46df-b2b5-d8ca6005f427-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.668919 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.687842 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.693085 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: E0126 09:27:16.693841 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-api" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.693876 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-api" Jan 26 09:27:16 crc kubenswrapper[4827]: E0126 09:27:16.693895 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-log" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.693903 4827 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-log" Jan 26 09:27:16 crc kubenswrapper[4827]: E0126 09:27:16.693920 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerName="nova-scheduler-scheduler" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.693930 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerName="nova-scheduler-scheduler" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.694146 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-log" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.694169 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="11934595-5099-4b13-b713-8b041bf2f130" containerName="nova-api-api" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.694184 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" containerName="nova-scheduler-scheduler" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.695559 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.698607 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.707005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.707128 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.707212 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpxgk\" (UniqueName: \"kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.707281 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.711978 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.812918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.813004 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.813046 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.813088 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpxgk\" (UniqueName: \"kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.813381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.820564 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.820779 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.846614 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpxgk\" (UniqueName: \"kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk\") pod \"nova-api-0\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " pod="openstack/nova-api-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.911025 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.919270 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.933136 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.934415 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.936724 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 09:27:16 crc kubenswrapper[4827]: I0126 09:27:16.941530 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.013362 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.118150 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcjl\" (UniqueName: \"kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.118265 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.118310 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.220281 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.220366 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fcjl\" (UniqueName: \"kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 
09:27:17.220469 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.228756 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.238129 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.249198 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fcjl\" (UniqueName: \"kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl\") pod \"nova-scheduler-0\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.258399 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.553419 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.578537 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerStarted","Data":"9c392d3afffb0be4216cbab9cd5241d56f760011a80271397ebc774799f214d7"} Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.718885 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11934595-5099-4b13-b713-8b041bf2f130" path="/var/lib/kubelet/pods/11934595-5099-4b13-b713-8b041bf2f130/volumes" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.721103 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabf68e1-11ae-46df-b2b5-d8ca6005f427" path="/var/lib/kubelet/pods/aabf68e1-11ae-46df-b2b5-d8ca6005f427/volumes" Jan 26 09:27:17 crc kubenswrapper[4827]: W0126 09:27:17.729423 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5713d106_6a55_46a6_9e9b_a0f937420e03.slice/crio-ef579dffe73f3b536ed53616db2d6641502f95e2daa5bc1d91f902fb72ea9bd3 WatchSource:0}: Error finding container ef579dffe73f3b536ed53616db2d6641502f95e2daa5bc1d91f902fb72ea9bd3: Status 404 returned error can't find the container with id ef579dffe73f3b536ed53616db2d6641502f95e2daa5bc1d91f902fb72ea9bd3 Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.732123 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.927724 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 09:27:17 crc kubenswrapper[4827]: I0126 09:27:17.927981 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.588559 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5713d106-6a55-46a6-9e9b-a0f937420e03","Type":"ContainerStarted","Data":"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566"} Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.588882 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5713d106-6a55-46a6-9e9b-a0f937420e03","Type":"ContainerStarted","Data":"ef579dffe73f3b536ed53616db2d6641502f95e2daa5bc1d91f902fb72ea9bd3"} Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.610524 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6105041289999997 podStartE2EDuration="2.610504129s" podCreationTimestamp="2026-01-26 09:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:18.608849474 +0000 UTC m=+1267.257521293" watchObservedRunningTime="2026-01-26 09:27:18.610504129 +0000 UTC m=+1267.259175938" Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.613138 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerStarted","Data":"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5"} Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.613199 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerStarted","Data":"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901"} Jan 26 09:27:18 crc kubenswrapper[4827]: I0126 09:27:18.646784 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.64676556 
podStartE2EDuration="2.64676556s" podCreationTimestamp="2026-01-26 09:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:18.634170492 +0000 UTC m=+1267.282842361" watchObservedRunningTime="2026-01-26 09:27:18.64676556 +0000 UTC m=+1267.295437389" Jan 26 09:27:20 crc kubenswrapper[4827]: I0126 09:27:20.867733 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 09:27:21 crc kubenswrapper[4827]: E0126 09:27:21.993553 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:27:22 crc kubenswrapper[4827]: I0126 09:27:22.259125 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 09:27:22 crc kubenswrapper[4827]: I0126 09:27:22.927409 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 09:27:22 crc kubenswrapper[4827]: I0126 09:27:22.928515 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 09:27:23 crc kubenswrapper[4827]: I0126 09:27:23.963920 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:23 crc kubenswrapper[4827]: I0126 09:27:23.963920 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:27 crc kubenswrapper[4827]: I0126 09:27:27.014435 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:27:27 crc kubenswrapper[4827]: I0126 09:27:27.014835 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:27:27 crc kubenswrapper[4827]: I0126 09:27:27.259100 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 09:27:27 crc kubenswrapper[4827]: I0126 09:27:27.291893 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 09:27:27 crc kubenswrapper[4827]: I0126 09:27:27.737192 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 09:27:28 crc kubenswrapper[4827]: I0126 09:27:28.096884 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:28 crc kubenswrapper[4827]: I0126 09:27:28.096909 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 09:27:32 crc kubenswrapper[4827]: E0126 09:27:32.226786 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:27:32 crc kubenswrapper[4827]: I0126 09:27:32.934927 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 09:27:32 crc kubenswrapper[4827]: I0126 09:27:32.935307 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 09:27:32 crc kubenswrapper[4827]: I0126 09:27:32.941551 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 09:27:32 crc kubenswrapper[4827]: I0126 09:27:32.941951 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.656906 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.747616 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle\") pod \"7012d90e-6e98-4755-a4c8-0711f4167fb9\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.747686 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data\") pod \"7012d90e-6e98-4755-a4c8-0711f4167fb9\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.747702 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wr6\" (UniqueName: \"kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6\") pod \"7012d90e-6e98-4755-a4c8-0711f4167fb9\" (UID: \"7012d90e-6e98-4755-a4c8-0711f4167fb9\") " Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.752927 4827 generic.go:334] "Generic (PLEG): container finished" podID="7012d90e-6e98-4755-a4c8-0711f4167fb9" containerID="dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651" exitCode=137 Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.752997 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.752971 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7012d90e-6e98-4755-a4c8-0711f4167fb9","Type":"ContainerDied","Data":"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651"} Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.753095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7012d90e-6e98-4755-a4c8-0711f4167fb9","Type":"ContainerDied","Data":"9385d09f69b28f2b80e0fa3e55bade7f471ce14347a6bd1a3151e378157f3341"} Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.753121 4827 scope.go:117] "RemoveContainer" containerID="dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.753222 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6" (OuterVolumeSpecName: "kube-api-access-k8wr6") pod "7012d90e-6e98-4755-a4c8-0711f4167fb9" (UID: "7012d90e-6e98-4755-a4c8-0711f4167fb9"). InnerVolumeSpecName "kube-api-access-k8wr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.772679 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7012d90e-6e98-4755-a4c8-0711f4167fb9" (UID: "7012d90e-6e98-4755-a4c8-0711f4167fb9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.785830 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data" (OuterVolumeSpecName: "config-data") pod "7012d90e-6e98-4755-a4c8-0711f4167fb9" (UID: "7012d90e-6e98-4755-a4c8-0711f4167fb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.844733 4827 scope.go:117] "RemoveContainer" containerID="dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651" Jan 26 09:27:34 crc kubenswrapper[4827]: E0126 09:27:34.845283 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651\": container with ID starting with dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651 not found: ID does not exist" containerID="dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.845324 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651"} err="failed to get container status \"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651\": rpc error: code = NotFound desc = could not find container \"dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651\": container with ID starting with dfc24e3a2bc26bf02c5ddb742ffe347cfc7cbe2d310d31771b2e6045b5cd7651 not found: ID does not exist" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.849555 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 
09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.849582 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7012d90e-6e98-4755-a4c8-0711f4167fb9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:34 crc kubenswrapper[4827]: I0126 09:27:34.849591 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wr6\" (UniqueName: \"kubernetes.io/projected/7012d90e-6e98-4755-a4c8-0711f4167fb9-kube-api-access-k8wr6\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.090730 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.100615 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.124893 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:35 crc kubenswrapper[4827]: E0126 09:27:35.125359 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7012d90e-6e98-4755-a4c8-0711f4167fb9" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.125386 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7012d90e-6e98-4755-a4c8-0711f4167fb9" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.125595 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7012d90e-6e98-4755-a4c8-0711f4167fb9" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.126300 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.130878 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.204753 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.205238 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.205419 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.308556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.308625 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.308752 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzq9\" (UniqueName: \"kubernetes.io/projected/fd4d28c4-421a-40a9-8629-e832e0aa002f-kube-api-access-kgzq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 
crc kubenswrapper[4827]: I0126 09:27:35.308797 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.308850 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.410541 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgzq9\" (UniqueName: \"kubernetes.io/projected/fd4d28c4-421a-40a9-8629-e832e0aa002f-kube-api-access-kgzq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.410602 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.410681 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 
09:27:35.410754 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.410806 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.422627 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.423608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.423357 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.443919 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4d28c4-421a-40a9-8629-e832e0aa002f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.447422 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgzq9\" (UniqueName: \"kubernetes.io/projected/fd4d28c4-421a-40a9-8629-e832e0aa002f-kube-api-access-kgzq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd4d28c4-421a-40a9-8629-e832e0aa002f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.538625 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.714878 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7012d90e-6e98-4755-a4c8-0711f4167fb9" path="/var/lib/kubelet/pods/7012d90e-6e98-4755-a4c8-0711f4167fb9/volumes" Jan 26 09:27:35 crc kubenswrapper[4827]: I0126 09:27:35.981805 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 09:27:36 crc kubenswrapper[4827]: I0126 09:27:36.773522 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd4d28c4-421a-40a9-8629-e832e0aa002f","Type":"ContainerStarted","Data":"ca0615974dd8364132e7d3338fd8b9116c6efff418e59fc1520aa2ba7290f745"} Jan 26 09:27:36 crc kubenswrapper[4827]: I0126 09:27:36.773571 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd4d28c4-421a-40a9-8629-e832e0aa002f","Type":"ContainerStarted","Data":"7a2897c67370a79733d4f93cca5989d42e0f3c4fc4b89040c5b90db50cb42673"} Jan 26 09:27:36 crc kubenswrapper[4827]: I0126 09:27:36.798179 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.798144979 podStartE2EDuration="1.798144979s" podCreationTimestamp="2026-01-26 09:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:36.796556347 +0000 UTC m=+1285.445228166" watchObservedRunningTime="2026-01-26 09:27:36.798144979 +0000 UTC m=+1285.446816878" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.017387 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.017922 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.018517 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.022063 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.781437 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.785176 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.982110 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:27:37 crc kubenswrapper[4827]: I0126 09:27:37.983561 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.013878 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.057577 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.057638 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.057790 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.057850 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wds2\" (UniqueName: \"kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.057874 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.159068 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wds2\" (UniqueName: \"kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.159115 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.159167 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.159192 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.159278 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.160604 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.160630 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.160674 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.160870 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config\") pod \"dnsmasq-dns-78c596d7cf-clzww\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.190534 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wds2\" (UniqueName: \"kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2\") pod \"dnsmasq-dns-78c596d7cf-clzww\" 
(UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.313064 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:38 crc kubenswrapper[4827]: I0126 09:27:38.803489 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:27:38 crc kubenswrapper[4827]: W0126 09:27:38.842796 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66959c4_eb10_4fe0_ba7d_ac0f1c3c1baa.slice/crio-33a1eac3a0c006b4224e8b39f3760b389df4699382949198427f1c098da71a94 WatchSource:0}: Error finding container 33a1eac3a0c006b4224e8b39f3760b389df4699382949198427f1c098da71a94: Status 404 returned error can't find the container with id 33a1eac3a0c006b4224e8b39f3760b389df4699382949198427f1c098da71a94 Jan 26 09:27:39 crc kubenswrapper[4827]: I0126 09:27:39.828097 4827 generic.go:334] "Generic (PLEG): container finished" podID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerID="4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628" exitCode=0 Jan 26 09:27:39 crc kubenswrapper[4827]: I0126 09:27:39.828233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" event={"ID":"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa","Type":"ContainerDied","Data":"4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628"} Jan 26 09:27:39 crc kubenswrapper[4827]: I0126 09:27:39.829382 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" event={"ID":"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa","Type":"ContainerStarted","Data":"33a1eac3a0c006b4224e8b39f3760b389df4699382949198427f1c098da71a94"} Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.342493 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.539510 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.660538 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.661062 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-central-agent" containerID="cri-o://80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.661259 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="sg-core" containerID="cri-o://a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.661290 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-notification-agent" containerID="cri-o://8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.661540 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="proxy-httpd" containerID="cri-o://94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.839432 4827 generic.go:334] "Generic (PLEG): container finished" podID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerID="94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b" 
exitCode=0 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.839458 4827 generic.go:334] "Generic (PLEG): container finished" podID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerID="a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1" exitCode=2 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.839497 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerDied","Data":"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b"} Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.839521 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerDied","Data":"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1"} Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.847320 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-log" containerID="cri-o://3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.848233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" event={"ID":"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa","Type":"ContainerStarted","Data":"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4"} Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.848264 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.848519 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-api" 
containerID="cri-o://56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5" gracePeriod=30 Jan 26 09:27:40 crc kubenswrapper[4827]: I0126 09:27:40.874695 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" podStartSLOduration=3.874635153 podStartE2EDuration="3.874635153s" podCreationTimestamp="2026-01-26 09:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:40.872071325 +0000 UTC m=+1289.520743144" watchObservedRunningTime="2026-01-26 09:27:40.874635153 +0000 UTC m=+1289.523306972" Jan 26 09:27:41 crc kubenswrapper[4827]: I0126 09:27:41.857548 4827 generic.go:334] "Generic (PLEG): container finished" podID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerID="80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d" exitCode=0 Jan 26 09:27:41 crc kubenswrapper[4827]: I0126 09:27:41.857929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerDied","Data":"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d"} Jan 26 09:27:41 crc kubenswrapper[4827]: I0126 09:27:41.860364 4827 generic.go:334] "Generic (PLEG): container finished" podID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerID="3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901" exitCode=143 Jan 26 09:27:41 crc kubenswrapper[4827]: I0126 09:27:41.860425 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerDied","Data":"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901"} Jan 26 09:27:42 crc kubenswrapper[4827]: I0126 09:27:42.647975 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:27:42 crc kubenswrapper[4827]: I0126 09:27:42.648020 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:27:42 crc kubenswrapper[4827]: E0126 09:27:42.916763 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.730127 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870099 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870511 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870571 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870633 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870686 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870741 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98nj9\" (UniqueName: \"kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.870812 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts\") pod \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\" (UID: \"20979b5a-e7a6-4524-a19b-5b38ba94ef2c\") " Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.872508 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.872833 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.884608 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9" (OuterVolumeSpecName: "kube-api-access-98nj9") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "kube-api-access-98nj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.886133 4827 generic.go:334] "Generic (PLEG): container finished" podID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerID="8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba" exitCode=0 Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.886226 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.886239 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerDied","Data":"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba"} Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.886588 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20979b5a-e7a6-4524-a19b-5b38ba94ef2c","Type":"ContainerDied","Data":"15fe8c6db2e22671411d59b02082f4f8cdf305ac8086ea6e1b4745b2ff4e9b94"} Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.886685 4827 scope.go:117] "RemoveContainer" containerID="94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.896102 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts" (OuterVolumeSpecName: "scripts") pod 
"20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.917875 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.930873 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973347 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973380 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973390 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973398 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973406 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98nj9\" (UniqueName: \"kubernetes.io/projected/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-kube-api-access-98nj9\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.973418 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.976806 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: 
"20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:43 crc kubenswrapper[4827]: I0126 09:27:43.997388 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data" (OuterVolumeSpecName: "config-data") pod "20979b5a-e7a6-4524-a19b-5b38ba94ef2c" (UID: "20979b5a-e7a6-4524-a19b-5b38ba94ef2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.002208 4827 scope.go:117] "RemoveContainer" containerID="a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.022597 4827 scope.go:117] "RemoveContainer" containerID="8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.074796 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.074833 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20979b5a-e7a6-4524-a19b-5b38ba94ef2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.105067 4827 scope.go:117] "RemoveContainer" containerID="80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.124756 4827 scope.go:117] "RemoveContainer" containerID="94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.126093 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b\": container with ID starting with 94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b not found: ID does not exist" containerID="94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126130 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b"} err="failed to get container status \"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b\": rpc error: code = NotFound desc = could not find container \"94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b\": container with ID starting with 94b3a621066d44cd64c55b9301e851ec15a653fda8e47377e65b7af349ce855b not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126156 4827 scope.go:117] "RemoveContainer" containerID="a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.126440 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1\": container with ID starting with a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1 not found: ID does not exist" containerID="a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126466 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1"} err="failed to get container status \"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1\": rpc error: code = NotFound desc = could not find container \"a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1\": container with ID 
starting with a7e3d789c95de1f1feecd088819a3cdb219846fdd1e1ac506c4bf2b4bfb782b1 not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126483 4827 scope.go:117] "RemoveContainer" containerID="8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.126803 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba\": container with ID starting with 8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba not found: ID does not exist" containerID="8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126830 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba"} err="failed to get container status \"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba\": rpc error: code = NotFound desc = could not find container \"8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba\": container with ID starting with 8385e6afc4a38ebad515a8ba83d7db4b64ebe93a8a430e0b346327cdc8b4a9ba not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.126884 4827 scope.go:117] "RemoveContainer" containerID="80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.127144 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d\": container with ID starting with 80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d not found: ID does not exist" containerID="80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d" Jan 26 
09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.127175 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d"} err="failed to get container status \"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d\": rpc error: code = NotFound desc = could not find container \"80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d\": container with ID starting with 80178a4a0305b59e818dd8c1938ecbc8460488d8508176358f7fa0055145f34d not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.250393 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.261218 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281268 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.281614 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="sg-core" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281627 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="sg-core" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.281671 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-notification-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281677 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-notification-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.281693 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="proxy-httpd" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281699 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="proxy-httpd" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.281709 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-central-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281715 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-central-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281879 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="proxy-httpd" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281890 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-central-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281899 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="ceilometer-notification-agent" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.281908 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" containerName="sg-core" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.283315 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.290250 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.290406 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.292479 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.306415 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.336419 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386194 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386456 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc 
kubenswrapper[4827]: I0126 09:27:44.386513 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjmj\" (UniqueName: \"kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386569 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386586 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.386670 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487294 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle\") pod \"ff3d8932-2fd7-4a7b-9588-80826729ba68\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487459 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpxgk\" (UniqueName: \"kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk\") pod \"ff3d8932-2fd7-4a7b-9588-80826729ba68\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487577 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs\") pod \"ff3d8932-2fd7-4a7b-9588-80826729ba68\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487605 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data\") pod \"ff3d8932-2fd7-4a7b-9588-80826729ba68\" (UID: \"ff3d8932-2fd7-4a7b-9588-80826729ba68\") " Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487914 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487960 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" 
Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.487985 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pjmj\" (UniqueName: \"kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488052 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488141 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.488689 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.489183 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs" (OuterVolumeSpecName: "logs") pod "ff3d8932-2fd7-4a7b-9588-80826729ba68" (UID: "ff3d8932-2fd7-4a7b-9588-80826729ba68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.491491 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.495596 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.495842 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk" (OuterVolumeSpecName: "kube-api-access-kpxgk") pod "ff3d8932-2fd7-4a7b-9588-80826729ba68" (UID: "ff3d8932-2fd7-4a7b-9588-80826729ba68"). 
InnerVolumeSpecName "kube-api-access-kpxgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.496139 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.499766 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.506324 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.511439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.511465 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pjmj\" (UniqueName: \"kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj\") pod \"ceilometer-0\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.528170 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data" (OuterVolumeSpecName: "config-data") pod "ff3d8932-2fd7-4a7b-9588-80826729ba68" (UID: "ff3d8932-2fd7-4a7b-9588-80826729ba68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.535527 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff3d8932-2fd7-4a7b-9588-80826729ba68" (UID: "ff3d8932-2fd7-4a7b-9588-80826729ba68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.593198 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.593234 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpxgk\" (UniqueName: \"kubernetes.io/projected/ff3d8932-2fd7-4a7b-9588-80826729ba68-kube-api-access-kpxgk\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.593247 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff3d8932-2fd7-4a7b-9588-80826729ba68-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.593261 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3d8932-2fd7-4a7b-9588-80826729ba68-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.615471 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.896403 4827 generic.go:334] "Generic (PLEG): container finished" podID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerID="56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5" exitCode=0 Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.896463 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.896455 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerDied","Data":"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5"} Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.896849 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff3d8932-2fd7-4a7b-9588-80826729ba68","Type":"ContainerDied","Data":"9c392d3afffb0be4216cbab9cd5241d56f760011a80271397ebc774799f214d7"} Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.896874 4827 scope.go:117] "RemoveContainer" containerID="56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.929101 4827 scope.go:117] "RemoveContainer" containerID="3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.961408 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.967822 4827 scope.go:117] "RemoveContainer" containerID="56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.970889 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5\": container with ID starting with 56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5 not found: ID does not exist" containerID="56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.970942 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5"} err="failed to get container status \"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5\": rpc error: code = NotFound desc = could not find container \"56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5\": container with ID starting with 56af8dfdb4c8a151a8a21ee6e0212a4f493a6fd7fc1ebe50ac6eda63c31afbf5 not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.970975 4827 scope.go:117] "RemoveContainer" containerID="3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.971141 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.972202 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901\": container with ID starting with 3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901 not found: ID does not exist" containerID="3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.972256 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901"} err="failed to get container status \"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901\": rpc 
error: code = NotFound desc = could not find container \"3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901\": container with ID starting with 3f558db86cb0a75ddaccc507051f8c74303c46306636d7202ea0d2f3b245b901 not found: ID does not exist" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.979138 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.979465 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-log" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.979481 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-log" Jan 26 09:27:44 crc kubenswrapper[4827]: E0126 09:27:44.979500 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-api" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.979507 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-api" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.979689 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-api" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.979712 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" containerName="nova-api-log" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.980584 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.984073 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.984283 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 09:27:44 crc kubenswrapper[4827]: I0126 09:27:44.985511 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001015 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001068 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001105 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001137 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm\") pod \"nova-api-0\" (UID: 
\"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001189 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.001222 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.016627 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.092603 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102394 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102476 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102515 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102607 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.102633 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.103142 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.111217 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.111567 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.111999 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.121298 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.127863 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm\") pod \"nova-api-0\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.302484 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.538839 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.571654 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.714122 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20979b5a-e7a6-4524-a19b-5b38ba94ef2c" path="/var/lib/kubelet/pods/20979b5a-e7a6-4524-a19b-5b38ba94ef2c/volumes" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.717272 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff3d8932-2fd7-4a7b-9588-80826729ba68" path="/var/lib/kubelet/pods/ff3d8932-2fd7-4a7b-9588-80826729ba68/volumes" Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.779992 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.908755 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerStarted","Data":"19c1066fb3784e6dd833d208071236d5d7dd1e328788c3161a5474ee23d3d7c6"} Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.912110 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerStarted","Data":"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e"} Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 09:27:45.912161 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerStarted","Data":"ff1546e18932dfdb956fb53cf62c37e36e13773b31ebe5e7db2f3127c514c77d"} Jan 26 09:27:45 crc kubenswrapper[4827]: I0126 
09:27:45.937399 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.157272 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kdm7c"] Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.158683 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.163874 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.163965 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.174098 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kdm7c"] Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.343487 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdff6\" (UniqueName: \"kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.343560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.343633 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.343683 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.445486 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdff6\" (UniqueName: \"kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.445553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.445626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.445673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.451388 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.452008 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.460838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.476369 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdff6\" (UniqueName: \"kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6\") pod \"nova-cell1-cell-mapping-kdm7c\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.775736 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.925481 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerStarted","Data":"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9"} Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.925754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerStarted","Data":"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f"} Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.929900 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerStarted","Data":"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6"} Jan 26 09:27:46 crc kubenswrapper[4827]: I0126 09:27:46.929928 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerStarted","Data":"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a"} Jan 26 09:27:47 crc kubenswrapper[4827]: I0126 09:27:47.255117 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.25509829 podStartE2EDuration="3.25509829s" podCreationTimestamp="2026-01-26 09:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:46.977288712 +0000 UTC m=+1295.625960531" watchObservedRunningTime="2026-01-26 09:27:47.25509829 +0000 UTC m=+1295.903770109" Jan 26 09:27:47 crc kubenswrapper[4827]: I0126 09:27:47.256601 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kdm7c"] Jan 26 09:27:47 crc 
kubenswrapper[4827]: I0126 09:27:47.938399 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kdm7c" event={"ID":"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f","Type":"ContainerStarted","Data":"a394349f3c58d2735008320353db15b7185ea433f1eb7665c30002fecc993db7"} Jan 26 09:27:47 crc kubenswrapper[4827]: I0126 09:27:47.938821 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kdm7c" event={"ID":"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f","Type":"ContainerStarted","Data":"76fd3901cf42c5a3a40312394d591cdbc05ab65c9da466c6ed0f0fa2f8bc2684"} Jan 26 09:27:47 crc kubenswrapper[4827]: I0126 09:27:47.965353 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kdm7c" podStartSLOduration=1.965335118 podStartE2EDuration="1.965335118s" podCreationTimestamp="2026-01-26 09:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:47.954301113 +0000 UTC m=+1296.602972942" watchObservedRunningTime="2026-01-26 09:27:47.965335118 +0000 UTC m=+1296.614006937" Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.314825 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.375944 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.376189 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerName="dnsmasq-dns" containerID="cri-o://87bd9c50a25fcebb45a76249b643413fa576917391bb4420549f676614c42251" gracePeriod=10 Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.960815 4827 generic.go:334] "Generic (PLEG): container 
finished" podID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerID="87bd9c50a25fcebb45a76249b643413fa576917391bb4420549f676614c42251" exitCode=0 Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.961190 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerDied","Data":"87bd9c50a25fcebb45a76249b643413fa576917391bb4420549f676614c42251"} Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.961222 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" event={"ID":"9fee1f20-7ed7-46de-ab65-265fee29ddc4","Type":"ContainerDied","Data":"68010e2c55b8c82641a98f1491db24d5c6dd398e3b214f87c43cb7bc2eb4c1a8"} Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.961237 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68010e2c55b8c82641a98f1491db24d5c6dd398e3b214f87c43cb7bc2eb4c1a8" Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.966848 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerStarted","Data":"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a"} Jan 26 09:27:48 crc kubenswrapper[4827]: I0126 09:27:48.967105 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.025186 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.051517 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.157777388 podStartE2EDuration="5.051494891s" podCreationTimestamp="2026-01-26 09:27:44 +0000 UTC" firstStartedPulling="2026-01-26 09:27:45.101557646 +0000 UTC m=+1293.750229465" lastFinishedPulling="2026-01-26 09:27:47.995275149 +0000 UTC m=+1296.643946968" observedRunningTime="2026-01-26 09:27:49.008956692 +0000 UTC m=+1297.657628511" watchObservedRunningTime="2026-01-26 09:27:49.051494891 +0000 UTC m=+1297.700166720" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.202740 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb\") pod \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.202872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config\") pod \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.202901 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc\") pod \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.203001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb\") pod 
\"9fee1f20-7ed7-46de-ab65-265fee29ddc4\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.203028 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d6hw\" (UniqueName: \"kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw\") pod \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\" (UID: \"9fee1f20-7ed7-46de-ab65-265fee29ddc4\") " Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.225200 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw" (OuterVolumeSpecName: "kube-api-access-6d6hw") pod "9fee1f20-7ed7-46de-ab65-265fee29ddc4" (UID: "9fee1f20-7ed7-46de-ab65-265fee29ddc4"). InnerVolumeSpecName "kube-api-access-6d6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.267206 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fee1f20-7ed7-46de-ab65-265fee29ddc4" (UID: "9fee1f20-7ed7-46de-ab65-265fee29ddc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.292614 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fee1f20-7ed7-46de-ab65-265fee29ddc4" (UID: "9fee1f20-7ed7-46de-ab65-265fee29ddc4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.299486 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fee1f20-7ed7-46de-ab65-265fee29ddc4" (UID: "9fee1f20-7ed7-46de-ab65-265fee29ddc4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.303849 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config" (OuterVolumeSpecName: "config") pod "9fee1f20-7ed7-46de-ab65-265fee29ddc4" (UID: "9fee1f20-7ed7-46de-ab65-265fee29ddc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.305478 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.305509 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d6hw\" (UniqueName: \"kubernetes.io/projected/9fee1f20-7ed7-46de-ab65-265fee29ddc4-kube-api-access-6d6hw\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.305526 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.305538 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-config\") on node \"crc\" DevicePath \"\"" Jan 26 
09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.305560 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fee1f20-7ed7-46de-ab65-265fee29ddc4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.972899 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-nxk49" Jan 26 09:27:49 crc kubenswrapper[4827]: I0126 09:27:49.995507 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:27:50 crc kubenswrapper[4827]: I0126 09:27:50.004020 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-nxk49"] Jan 26 09:27:51 crc kubenswrapper[4827]: I0126 09:27:51.715370 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" path="/var/lib/kubelet/pods/9fee1f20-7ed7-46de-ab65-265fee29ddc4/volumes" Jan 26 09:27:53 crc kubenswrapper[4827]: I0126 09:27:53.016667 4827 generic.go:334] "Generic (PLEG): container finished" podID="cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" containerID="a394349f3c58d2735008320353db15b7185ea433f1eb7665c30002fecc993db7" exitCode=0 Jan 26 09:27:53 crc kubenswrapper[4827]: I0126 09:27:53.016703 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kdm7c" event={"ID":"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f","Type":"ContainerDied","Data":"a394349f3c58d2735008320353db15b7185ea433f1eb7665c30002fecc993db7"} Jan 26 09:27:53 crc kubenswrapper[4827]: E0126 09:27:53.172479 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcea72f1d_1aad_49c8_bcfe_dfb4ed1ee03f.slice/crio-conmon-a394349f3c58d2735008320353db15b7185ea433f1eb7665c30002fecc993db7.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.384884 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.497270 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdff6\" (UniqueName: \"kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6\") pod \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.497440 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts\") pod \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.497498 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle\") pod \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.497531 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data\") pod \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\" (UID: \"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f\") " Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.504768 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts" (OuterVolumeSpecName: "scripts") pod "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" (UID: "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.516915 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6" (OuterVolumeSpecName: "kube-api-access-tdff6") pod "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" (UID: "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f"). InnerVolumeSpecName "kube-api-access-tdff6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.524888 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" (UID: "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.540867 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data" (OuterVolumeSpecName: "config-data") pod "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" (UID: "cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.599214 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.599249 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.599259 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:54 crc kubenswrapper[4827]: I0126 09:27:54.599268 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdff6\" (UniqueName: \"kubernetes.io/projected/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f-kube-api-access-tdff6\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.040449 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kdm7c" event={"ID":"cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f","Type":"ContainerDied","Data":"76fd3901cf42c5a3a40312394d591cdbc05ab65c9da466c6ed0f0fa2f8bc2684"} Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.040508 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76fd3901cf42c5a3a40312394d591cdbc05ab65c9da466c6ed0f0fa2f8bc2684" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.040533 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kdm7c" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.232689 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.233206 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5713d106-6a55-46a6-9e9b-a0f937420e03" containerName="nova-scheduler-scheduler" containerID="cri-o://a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566" gracePeriod=30 Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.249674 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.250299 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-log" containerID="cri-o://ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" gracePeriod=30 Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.250377 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-api" containerID="cri-o://d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" gracePeriod=30 Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.288563 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.289131 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" containerID="cri-o://aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc" gracePeriod=30 Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.289306 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" containerID="cri-o://31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d" gracePeriod=30 Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.821717 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920722 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920744 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920784 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920846 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.920865 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs\") pod \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\" (UID: \"ab7ecf0d-6e25-402e-ad0c-536ab9454c18\") " Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.921413 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs" (OuterVolumeSpecName: "logs") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.925774 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm" (OuterVolumeSpecName: "kube-api-access-h9hnm") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "kube-api-access-h9hnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.949262 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data" (OuterVolumeSpecName: "config-data") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.955148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.972575 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:55 crc kubenswrapper[4827]: I0126 09:27:55.982089 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab7ecf0d-6e25-402e-ad0c-536ab9454c18" (UID: "ab7ecf0d-6e25-402e-ad0c-536ab9454c18"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022303 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022340 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022353 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-kube-api-access-h9hnm\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022365 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022375 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.022383 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7ecf0d-6e25-402e-ad0c-536ab9454c18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.048327 4827 generic.go:334] "Generic (PLEG): container finished" podID="16e690cf-71f1-42bc-adb6-acf507816f08" containerID="aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc" exitCode=143 Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.048394 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerDied","Data":"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc"} Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.049939 4827 generic.go:334] "Generic (PLEG): container finished" podID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerID="d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" exitCode=0 Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.049964 4827 generic.go:334] "Generic (PLEG): container finished" podID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerID="ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" exitCode=143 Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.049985 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerDied","Data":"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9"} Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.049987 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.050022 4827 scope.go:117] "RemoveContainer" containerID="d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.050012 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerDied","Data":"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f"} Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.050114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab7ecf0d-6e25-402e-ad0c-536ab9454c18","Type":"ContainerDied","Data":"19c1066fb3784e6dd833d208071236d5d7dd1e328788c3161a5474ee23d3d7c6"} Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.086424 4827 scope.go:117] "RemoveContainer" containerID="ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.095104 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.113570 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.128783 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.129186 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" containerName="nova-manage" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129208 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" containerName="nova-manage" Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.129229 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" 
containerName="init" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129237 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerName="init" Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.129249 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-api" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129257 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-api" Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.129272 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-log" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129279 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-log" Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.129307 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerName="dnsmasq-dns" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129315 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerName="dnsmasq-dns" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129499 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fee1f20-7ed7-46de-ab65-265fee29ddc4" containerName="dnsmasq-dns" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129520 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-api" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.129533 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" containerName="nova-api-log" Jan 26 09:27:56 crc 
kubenswrapper[4827]: I0126 09:27:56.129548 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" containerName="nova-manage" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.130701 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.139623 4827 scope.go:117] "RemoveContainer" containerID="d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.140154 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.140264 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.140375 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.156264 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9\": container with ID starting with d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9 not found: ID does not exist" containerID="d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.156318 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9"} err="failed to get container status \"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9\": rpc error: code = NotFound desc = could not find container \"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9\": container with ID starting with 
d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9 not found: ID does not exist" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.156346 4827 scope.go:117] "RemoveContainer" containerID="ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.156422 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:56 crc kubenswrapper[4827]: E0126 09:27:56.158979 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f\": container with ID starting with ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f not found: ID does not exist" containerID="ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.159018 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f"} err="failed to get container status \"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f\": rpc error: code = NotFound desc = could not find container \"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f\": container with ID starting with ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f not found: ID does not exist" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.159045 4827 scope.go:117] "RemoveContainer" containerID="d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.159370 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9"} err="failed to get container status \"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9\": rpc 
error: code = NotFound desc = could not find container \"d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9\": container with ID starting with d832b5843741cf32b201b7dd9e5dfb631bf60225d078305beb2a5b4061f785f9 not found: ID does not exist" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.159403 4827 scope.go:117] "RemoveContainer" containerID="ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.159849 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f"} err="failed to get container status \"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f\": rpc error: code = NotFound desc = could not find container \"ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f\": container with ID starting with ea4f0b29a79a99d0e0e8f75c847e0955987c192dd8020f5f3af7dee6a0aa329f not found: ID does not exist" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230609 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230733 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f509fce4-52e1-4f74-8cfa-cfe156852aed-logs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230796 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-config-data\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230844 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8cg\" (UniqueName: \"kubernetes.io/projected/f509fce4-52e1-4f74-8cfa-cfe156852aed-kube-api-access-mx8cg\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230921 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.230964 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-public-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.332826 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.333410 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-public-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " 
pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.333536 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.333619 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f509fce4-52e1-4f74-8cfa-cfe156852aed-logs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.333736 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-config-data\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.334391 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8cg\" (UniqueName: \"kubernetes.io/projected/f509fce4-52e1-4f74-8cfa-cfe156852aed-kube-api-access-mx8cg\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.334086 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f509fce4-52e1-4f74-8cfa-cfe156852aed-logs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.337771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-public-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.338960 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.339784 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.339910 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f509fce4-52e1-4f74-8cfa-cfe156852aed-config-data\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.356120 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8cg\" (UniqueName: \"kubernetes.io/projected/f509fce4-52e1-4f74-8cfa-cfe156852aed-kube-api-access-mx8cg\") pod \"nova-api-0\" (UID: \"f509fce4-52e1-4f74-8cfa-cfe156852aed\") " pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.467816 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.471405 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.537979 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle\") pod \"5713d106-6a55-46a6-9e9b-a0f937420e03\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.538907 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fcjl\" (UniqueName: \"kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl\") pod \"5713d106-6a55-46a6-9e9b-a0f937420e03\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.539155 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data\") pod \"5713d106-6a55-46a6-9e9b-a0f937420e03\" (UID: \"5713d106-6a55-46a6-9e9b-a0f937420e03\") " Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.543291 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl" (OuterVolumeSpecName: "kube-api-access-5fcjl") pod "5713d106-6a55-46a6-9e9b-a0f937420e03" (UID: "5713d106-6a55-46a6-9e9b-a0f937420e03"). InnerVolumeSpecName "kube-api-access-5fcjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.595127 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5713d106-6a55-46a6-9e9b-a0f937420e03" (UID: "5713d106-6a55-46a6-9e9b-a0f937420e03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.595205 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data" (OuterVolumeSpecName: "config-data") pod "5713d106-6a55-46a6-9e9b-a0f937420e03" (UID: "5713d106-6a55-46a6-9e9b-a0f937420e03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.643140 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.643177 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5713d106-6a55-46a6-9e9b-a0f937420e03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.643191 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fcjl\" (UniqueName: \"kubernetes.io/projected/5713d106-6a55-46a6-9e9b-a0f937420e03-kube-api-access-5fcjl\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:56 crc kubenswrapper[4827]: I0126 09:27:56.956078 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.060463 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f509fce4-52e1-4f74-8cfa-cfe156852aed","Type":"ContainerStarted","Data":"648af4926a7d28806d3959f55d1cc3cc8dd2263909fa9c0fa823c85d295e1ddb"} Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.063049 4827 generic.go:334] "Generic (PLEG): container finished" podID="5713d106-6a55-46a6-9e9b-a0f937420e03" containerID="a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566" exitCode=0 Jan 26 
09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.063086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5713d106-6a55-46a6-9e9b-a0f937420e03","Type":"ContainerDied","Data":"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566"} Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.063116 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5713d106-6a55-46a6-9e9b-a0f937420e03","Type":"ContainerDied","Data":"ef579dffe73f3b536ed53616db2d6641502f95e2daa5bc1d91f902fb72ea9bd3"} Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.063137 4827 scope.go:117] "RemoveContainer" containerID="a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.063168 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.084601 4827 scope.go:117] "RemoveContainer" containerID="a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566" Jan 26 09:27:57 crc kubenswrapper[4827]: E0126 09:27:57.084984 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566\": container with ID starting with a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566 not found: ID does not exist" containerID="a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.085022 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566"} err="failed to get container status \"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566\": rpc error: code = NotFound desc = could not find container 
\"a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566\": container with ID starting with a9526bf95220d044890a96ec1781e32a49adf2be030cdaa94870f6636b480566 not found: ID does not exist" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.108605 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.120944 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.143228 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: E0126 09:27:57.143682 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5713d106-6a55-46a6-9e9b-a0f937420e03" containerName="nova-scheduler-scheduler" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.143698 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5713d106-6a55-46a6-9e9b-a0f937420e03" containerName="nova-scheduler-scheduler" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.143883 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5713d106-6a55-46a6-9e9b-a0f937420e03" containerName="nova-scheduler-scheduler" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.144443 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.149626 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.153281 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.252240 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.252336 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-config-data\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.252360 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n556x\" (UniqueName: \"kubernetes.io/projected/21741788-081f-4f17-973d-ae145a0469ff-kube-api-access-n556x\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.353822 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.353913 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-config-data\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.353942 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n556x\" (UniqueName: \"kubernetes.io/projected/21741788-081f-4f17-973d-ae145a0469ff-kube-api-access-n556x\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.358838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-config-data\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.359983 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21741788-081f-4f17-973d-ae145a0469ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.371264 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n556x\" (UniqueName: \"kubernetes.io/projected/21741788-081f-4f17-973d-ae145a0469ff-kube-api-access-n556x\") pod \"nova-scheduler-0\" (UID: \"21741788-081f-4f17-973d-ae145a0469ff\") " pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.466532 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.717067 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5713d106-6a55-46a6-9e9b-a0f937420e03" path="/var/lib/kubelet/pods/5713d106-6a55-46a6-9e9b-a0f937420e03/volumes" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.717880 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7ecf0d-6e25-402e-ad0c-536ab9454c18" path="/var/lib/kubelet/pods/ab7ecf0d-6e25-402e-ad0c-536ab9454c18/volumes" Jan 26 09:27:57 crc kubenswrapper[4827]: I0126 09:27:57.886805 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 09:27:57 crc kubenswrapper[4827]: W0126 09:27:57.889967 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21741788_081f_4f17_973d_ae145a0469ff.slice/crio-a41faea7e937e75b7c18b13fa534ee91dca3faeadfff4e072619a8322393212b WatchSource:0}: Error finding container a41faea7e937e75b7c18b13fa534ee91dca3faeadfff4e072619a8322393212b: Status 404 returned error can't find the container with id a41faea7e937e75b7c18b13fa534ee91dca3faeadfff4e072619a8322393212b Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.079106 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21741788-081f-4f17-973d-ae145a0469ff","Type":"ContainerStarted","Data":"e082fe8f13e35d13293720405c01e314d1a99744e0ae99bcfa76a4d576474887"} Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.079426 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21741788-081f-4f17-973d-ae145a0469ff","Type":"ContainerStarted","Data":"a41faea7e937e75b7c18b13fa534ee91dca3faeadfff4e072619a8322393212b"} Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.083991 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"f509fce4-52e1-4f74-8cfa-cfe156852aed","Type":"ContainerStarted","Data":"f6e280b6071eb94db67294f94f485c3e93c32b2cd67419f206b1b04edc5650a5"} Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.084039 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f509fce4-52e1-4f74-8cfa-cfe156852aed","Type":"ContainerStarted","Data":"296b232946cce3a10397afa7138cd897cf78c0cf8cb4590f00052c93bbe3f723"} Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.096415 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.096395702 podStartE2EDuration="1.096395702s" podCreationTimestamp="2026-01-26 09:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:58.095499938 +0000 UTC m=+1306.744171767" watchObservedRunningTime="2026-01-26 09:27:58.096395702 +0000 UTC m=+1306.745067521" Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.132467 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.132444231 podStartE2EDuration="2.132444231s" podCreationTimestamp="2026-01-26 09:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:27:58.117528401 +0000 UTC m=+1306.766200220" watchObservedRunningTime="2026-01-26 09:27:58.132444231 +0000 UTC m=+1306.781116050" Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.441141 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:45286->10.217.0.178:8775: read: connection reset by peer" Jan 26 09:27:58 crc 
kubenswrapper[4827]: I0126 09:27:58.441155 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:45284->10.217.0.178:8775: read: connection reset by peer" Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.865059 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.991077 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfc4z\" (UniqueName: \"kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z\") pod \"16e690cf-71f1-42bc-adb6-acf507816f08\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.991164 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data\") pod \"16e690cf-71f1-42bc-adb6-acf507816f08\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.991189 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs\") pod \"16e690cf-71f1-42bc-adb6-acf507816f08\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.991307 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle\") pod \"16e690cf-71f1-42bc-adb6-acf507816f08\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " Jan 26 09:27:58 crc 
kubenswrapper[4827]: I0126 09:27:58.991384 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs\") pod \"16e690cf-71f1-42bc-adb6-acf507816f08\" (UID: \"16e690cf-71f1-42bc-adb6-acf507816f08\") " Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.991975 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs" (OuterVolumeSpecName: "logs") pod "16e690cf-71f1-42bc-adb6-acf507816f08" (UID: "16e690cf-71f1-42bc-adb6-acf507816f08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:27:58 crc kubenswrapper[4827]: I0126 09:27:58.992409 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e690cf-71f1-42bc-adb6-acf507816f08-logs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:58.996076 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z" (OuterVolumeSpecName: "kube-api-access-gfc4z") pod "16e690cf-71f1-42bc-adb6-acf507816f08" (UID: "16e690cf-71f1-42bc-adb6-acf507816f08"). InnerVolumeSpecName "kube-api-access-gfc4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.019816 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16e690cf-71f1-42bc-adb6-acf507816f08" (UID: "16e690cf-71f1-42bc-adb6-acf507816f08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.038804 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data" (OuterVolumeSpecName: "config-data") pod "16e690cf-71f1-42bc-adb6-acf507816f08" (UID: "16e690cf-71f1-42bc-adb6-acf507816f08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.056203 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "16e690cf-71f1-42bc-adb6-acf507816f08" (UID: "16e690cf-71f1-42bc-adb6-acf507816f08"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.094797 4827 generic.go:334] "Generic (PLEG): container finished" podID="16e690cf-71f1-42bc-adb6-acf507816f08" containerID="31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d" exitCode=0 Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.095726 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096089 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfc4z\" (UniqueName: \"kubernetes.io/projected/16e690cf-71f1-42bc-adb6-acf507816f08-kube-api-access-gfc4z\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096112 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096126 4827 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096138 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e690cf-71f1-42bc-adb6-acf507816f08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerDied","Data":"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d"} Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096570 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16e690cf-71f1-42bc-adb6-acf507816f08","Type":"ContainerDied","Data":"003791cd1ca7cbf16a93926f1ee0de55b6bb0b69941ee419be47a1896f419263"} Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.096593 4827 scope.go:117] "RemoveContainer" containerID="31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.123500 4827 
scope.go:117] "RemoveContainer" containerID="aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.136577 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.145761 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.155462 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:59 crc kubenswrapper[4827]: E0126 09:27:59.155913 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.155935 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" Jan 26 09:27:59 crc kubenswrapper[4827]: E0126 09:27:59.155950 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.155958 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.156166 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-metadata" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.156191 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" containerName="nova-metadata-log" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.156850 4827 scope.go:117] "RemoveContainer" containerID="31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d" Jan 26 09:27:59 crc 
kubenswrapper[4827]: I0126 09:27:59.157254 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: E0126 09:27:59.157773 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d\": container with ID starting with 31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d not found: ID does not exist" containerID="31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.157815 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d"} err="failed to get container status \"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d\": rpc error: code = NotFound desc = could not find container \"31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d\": container with ID starting with 31d238d4857af75981ec2f4c9afb983dfc83d4c03c9e2a8b68e6c400871f824d not found: ID does not exist" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.157848 4827 scope.go:117] "RemoveContainer" containerID="aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc" Jan 26 09:27:59 crc kubenswrapper[4827]: E0126 09:27:59.162908 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc\": container with ID starting with aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc not found: ID does not exist" containerID="aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.162956 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc"} err="failed to get container status \"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc\": rpc error: code = NotFound desc = could not find container \"aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc\": container with ID starting with aa8eb1523542b03eeea640b117d0789400bd956643eddcc1fef4bf67271ff3cc not found: ID does not exist" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.163036 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.163478 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.186146 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.302946 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.303038 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.303074 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-config-data\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.303099 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44zk\" (UniqueName: \"kubernetes.io/projected/20987ce4-16e9-4364-9742-44454d336e33-kube-api-access-h44zk\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.303180 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20987ce4-16e9-4364-9742-44454d336e33-logs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.404515 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.404599 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.404673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-config-data\") pod \"nova-metadata-0\" (UID: 
\"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.404701 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44zk\" (UniqueName: \"kubernetes.io/projected/20987ce4-16e9-4364-9742-44454d336e33-kube-api-access-h44zk\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.404726 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20987ce4-16e9-4364-9742-44454d336e33-logs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.405210 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20987ce4-16e9-4364-9742-44454d336e33-logs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.409032 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.409051 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.410383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/20987ce4-16e9-4364-9742-44454d336e33-config-data\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.423893 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44zk\" (UniqueName: \"kubernetes.io/projected/20987ce4-16e9-4364-9742-44454d336e33-kube-api-access-h44zk\") pod \"nova-metadata-0\" (UID: \"20987ce4-16e9-4364-9742-44454d336e33\") " pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.481127 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.714790 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e690cf-71f1-42bc-adb6-acf507816f08" path="/var/lib/kubelet/pods/16e690cf-71f1-42bc-adb6-acf507816f08/volumes" Jan 26 09:27:59 crc kubenswrapper[4827]: W0126 09:27:59.907958 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20987ce4_16e9_4364_9742_44454d336e33.slice/crio-538c789a95edfb5e5480a1d244272a30aa652327d0887afeb83d92d1580a4d8b WatchSource:0}: Error finding container 538c789a95edfb5e5480a1d244272a30aa652327d0887afeb83d92d1580a4d8b: Status 404 returned error can't find the container with id 538c789a95edfb5e5480a1d244272a30aa652327d0887afeb83d92d1580a4d8b Jan 26 09:27:59 crc kubenswrapper[4827]: I0126 09:27:59.910067 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 09:28:00 crc kubenswrapper[4827]: I0126 09:28:00.111169 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20987ce4-16e9-4364-9742-44454d336e33","Type":"ContainerStarted","Data":"c4c2fbc9178e9d47b962edf4a319f776980a638b7e38ede05df5953cf311faca"} Jan 
26 09:28:00 crc kubenswrapper[4827]: I0126 09:28:00.111595 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20987ce4-16e9-4364-9742-44454d336e33","Type":"ContainerStarted","Data":"538c789a95edfb5e5480a1d244272a30aa652327d0887afeb83d92d1580a4d8b"} Jan 26 09:28:01 crc kubenswrapper[4827]: I0126 09:28:01.127697 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20987ce4-16e9-4364-9742-44454d336e33","Type":"ContainerStarted","Data":"70ff1557a6f160937efbd91828abd4049d67c3bb69127a76f31530a53781744e"} Jan 26 09:28:01 crc kubenswrapper[4827]: I0126 09:28:01.149658 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.149617449 podStartE2EDuration="2.149617449s" podCreationTimestamp="2026-01-26 09:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:28:01.148263161 +0000 UTC m=+1309.796934980" watchObservedRunningTime="2026-01-26 09:28:01.149617449 +0000 UTC m=+1309.798289268" Jan 26 09:28:02 crc kubenswrapper[4827]: I0126 09:28:02.466708 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 09:28:03 crc kubenswrapper[4827]: E0126 09:28:03.397206 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dc3b7b_13c0_4e66_abc8_b450be207a11.slice/crio-conmon-691d6047ae44e1f0bff8235bd15ebb11c322aa76acd690b276a9890cc900e15b.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:28:04 crc kubenswrapper[4827]: I0126 09:28:04.482097 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 09:28:04 crc kubenswrapper[4827]: I0126 09:28:04.482157 4827 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 09:28:06 crc kubenswrapper[4827]: I0126 09:28:06.471555 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:28:06 crc kubenswrapper[4827]: I0126 09:28:06.471632 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 09:28:07 crc kubenswrapper[4827]: I0126 09:28:07.466901 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 09:28:07 crc kubenswrapper[4827]: I0126 09:28:07.488013 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f509fce4-52e1-4f74-8cfa-cfe156852aed" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 09:28:07 crc kubenswrapper[4827]: I0126 09:28:07.488050 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f509fce4-52e1-4f74-8cfa-cfe156852aed" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 09:28:07 crc kubenswrapper[4827]: I0126 09:28:07.523045 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 09:28:08 crc kubenswrapper[4827]: I0126 09:28:08.219819 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 09:28:09 crc kubenswrapper[4827]: I0126 09:28:09.482312 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 09:28:09 crc kubenswrapper[4827]: I0126 09:28:09.482376 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 
09:28:10 crc kubenswrapper[4827]: I0126 09:28:10.499889 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="20987ce4-16e9-4364-9742-44454d336e33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 09:28:10 crc kubenswrapper[4827]: I0126 09:28:10.500243 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="20987ce4-16e9-4364-9742-44454d336e33" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 09:28:12 crc kubenswrapper[4827]: I0126 09:28:12.269444 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:28:12 crc kubenswrapper[4827]: I0126 09:28:12.269798 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:28:14 crc kubenswrapper[4827]: I0126 09:28:14.624666 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 09:28:16 crc kubenswrapper[4827]: I0126 09:28:16.485656 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 09:28:16 crc kubenswrapper[4827]: I0126 09:28:16.486365 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 09:28:16 crc 
kubenswrapper[4827]: I0126 09:28:16.492703 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 09:28:16 crc kubenswrapper[4827]: I0126 09:28:16.493903 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 09:28:17 crc kubenswrapper[4827]: I0126 09:28:17.271935 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 09:28:17 crc kubenswrapper[4827]: I0126 09:28:17.279743 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 09:28:19 crc kubenswrapper[4827]: I0126 09:28:19.489078 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 09:28:19 crc kubenswrapper[4827]: I0126 09:28:19.493591 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 09:28:19 crc kubenswrapper[4827]: I0126 09:28:19.494273 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 09:28:20 crc kubenswrapper[4827]: I0126 09:28:20.303829 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 09:28:28 crc kubenswrapper[4827]: I0126 09:28:28.634257 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:30 crc kubenswrapper[4827]: I0126 09:28:30.036360 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:33 crc kubenswrapper[4827]: I0126 09:28:33.349733 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="rabbitmq" containerID="cri-o://1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99" gracePeriod=604796 Jan 26 09:28:34 crc 
kubenswrapper[4827]: I0126 09:28:34.869306 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="rabbitmq" containerID="cri-o://358f813ad5378dbe46f27f966c54ce92ed4b5421cdbfbedcab4e4c646d8d3c43" gracePeriod=604796 Jan 26 09:28:37 crc kubenswrapper[4827]: I0126 09:28:37.163167 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 26 09:28:37 crc kubenswrapper[4827]: I0126 09:28:37.513584 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 26 09:28:39 crc kubenswrapper[4827]: I0126 09:28:39.919827 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084205 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084268 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4qwb\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084292 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084336 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084407 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084435 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084478 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084518 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084535 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084550 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084574 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd\") pod \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\" (UID: \"6aa4b7d1-606d-4833-9b9c-a2c78297c312\") " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084887 
4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.084905 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.085158 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.085196 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.092140 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb" (OuterVolumeSpecName: "kube-api-access-p4qwb") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "kube-api-access-p4qwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.093021 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.093053 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.093326 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.104896 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.114573 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info" (OuterVolumeSpecName: "pod-info") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.126263 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data" (OuterVolumeSpecName: "config-data") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.187504 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.187796 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.187875 4827 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6aa4b7d1-606d-4833-9b9c-a2c78297c312-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.187947 4827 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6aa4b7d1-606d-4833-9b9c-a2c78297c312-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc 
kubenswrapper[4827]: I0126 09:28:40.188012 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.188078 4827 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.188150 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4qwb\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-kube-api-access-p4qwb\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.212087 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf" (OuterVolumeSpecName: "server-conf") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.214953 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.248479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6aa4b7d1-606d-4833-9b9c-a2c78297c312" (UID: "6aa4b7d1-606d-4833-9b9c-a2c78297c312"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.289974 4827 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6aa4b7d1-606d-4833-9b9c-a2c78297c312-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.290017 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.290029 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6aa4b7d1-606d-4833-9b9c-a2c78297c312-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.507739 4827 generic.go:334] "Generic (PLEG): container finished" podID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerID="1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99" exitCode=0 Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.507793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerDied","Data":"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99"} Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.507839 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.507858 4827 scope.go:117] "RemoveContainer" containerID="1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.507840 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6aa4b7d1-606d-4833-9b9c-a2c78297c312","Type":"ContainerDied","Data":"f938ab181dd124c1c3a05206fe99758a4e9a8b64d2e6d618095924ac7ae7ec9d"} Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.530050 4827 scope.go:117] "RemoveContainer" containerID="bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.544987 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.554376 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.565524 4827 scope.go:117] "RemoveContainer" containerID="1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99" Jan 26 09:28:40 crc kubenswrapper[4827]: E0126 09:28:40.616057 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99\": container with ID starting with 1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99 not found: ID does not exist" containerID="1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.616105 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99"} err="failed to get container status 
\"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99\": rpc error: code = NotFound desc = could not find container \"1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99\": container with ID starting with 1026ef7b472222bf899165faa0dde69ce995976697531c187e8a3ed1e0a9cd99 not found: ID does not exist" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.616143 4827 scope.go:117] "RemoveContainer" containerID="bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842" Jan 26 09:28:40 crc kubenswrapper[4827]: E0126 09:28:40.617673 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842\": container with ID starting with bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842 not found: ID does not exist" containerID="bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.617702 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842"} err="failed to get container status \"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842\": rpc error: code = NotFound desc = could not find container \"bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842\": container with ID starting with bfe16dcb8880d5ac87521ff2ea764dd5c27f1ef810e9e089f55d0644414e5842 not found: ID does not exist" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.623842 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:40 crc kubenswrapper[4827]: E0126 09:28:40.624895 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="rabbitmq" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.624916 4827 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="rabbitmq" Jan 26 09:28:40 crc kubenswrapper[4827]: E0126 09:28:40.624940 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="setup-container" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.624948 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="setup-container" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.626380 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" containerName="rabbitmq" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.631426 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.637004 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.637266 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.638238 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.638383 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ghkwb" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.638771 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.638957 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.639087 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.672289 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.818894 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.818937 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.818967 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d4c7e4-4f6a-402c-af73-84404c567c53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819012 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819060 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d4c7e4-4f6a-402c-af73-84404c567c53-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819078 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819135 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcsz\" (UniqueName: 
\"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-kube-api-access-2wcsz\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.819205 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920130 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d4c7e4-4f6a-402c-af73-84404c567c53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920234 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920266 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d4c7e4-4f6a-402c-af73-84404c567c53-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920404 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920683 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920707 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920728 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wcsz\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-kube-api-access-2wcsz\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920809 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920810 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920826 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920841 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.920970 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.921221 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.921506 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.921702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.922697 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d4c7e4-4f6a-402c-af73-84404c567c53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.923308 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d4c7e4-4f6a-402c-af73-84404c567c53-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.925202 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.925350 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.926089 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d4c7e4-4f6a-402c-af73-84404c567c53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.955856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wcsz\" (UniqueName: \"kubernetes.io/projected/d2d4c7e4-4f6a-402c-af73-84404c567c53-kube-api-access-2wcsz\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.966728 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d4c7e4-4f6a-402c-af73-84404c567c53\") " pod="openstack/rabbitmq-server-0" Jan 26 09:28:40 crc kubenswrapper[4827]: I0126 09:28:40.978046 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.594313 4827 generic.go:334] "Generic (PLEG): container finished" podID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerID="358f813ad5378dbe46f27f966c54ce92ed4b5421cdbfbedcab4e4c646d8d3c43" exitCode=0 Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.594833 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerDied","Data":"358f813ad5378dbe46f27f966c54ce92ed4b5421cdbfbedcab4e4c646d8d3c43"} Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.626706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.717459 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa4b7d1-606d-4833-9b9c-a2c78297c312" path="/var/lib/kubelet/pods/6aa4b7d1-606d-4833-9b9c-a2c78297c312/volumes" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.782086 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942244 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942310 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942347 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942383 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942408 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942463 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942559 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgmk\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942582 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942605 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.942628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins\") pod \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\" (UID: \"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b\") " Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 
09:28:41.943227 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.944149 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.948259 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.954966 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.956211 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info" (OuterVolumeSpecName: "pod-info") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.956231 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.958052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.959889 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk" (OuterVolumeSpecName: "kube-api-access-bqgmk") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "kube-api-access-bqgmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:41 crc kubenswrapper[4827]: I0126 09:28:41.983275 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data" (OuterVolumeSpecName: "config-data") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.015276 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf" (OuterVolumeSpecName: "server-conf") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044042 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044070 4827 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044080 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044090 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgmk\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-kube-api-access-bqgmk\") on 
node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044116 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044126 4827 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044135 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044143 4827 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044152 4827 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.044159 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.048520 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" (UID: "6cc01e51-9c3e-42ad-9ba6-11ad80b8366b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.060825 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.145557 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.145584 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.268686 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.268758 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.268807 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.269579 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:28:42 crc kubenswrapper[4827]: I0126 09:28:42.269708 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1" gracePeriod=600 Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.208079 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.208104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6cc01e51-9c3e-42ad-9ba6-11ad80b8366b","Type":"ContainerDied","Data":"46663b269d0c3c9d1d09d24e5cdd2bec71e0b25d2f6d4b2547643a48d278c4c4"} Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.208442 4827 scope.go:117] "RemoveContainer" containerID="358f813ad5378dbe46f27f966c54ce92ed4b5421cdbfbedcab4e4c646d8d3c43" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.210159 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d4c7e4-4f6a-402c-af73-84404c567c53","Type":"ContainerStarted","Data":"d984d78ce5f2bbf4571f9bcef29102c0e6a6e3430e25a218e75ebabe65a49c4b"} Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.252411 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.256490 4827 scope.go:117] "RemoveContainer" containerID="101e5bc11ac55692fe30de69f45ef6185d49a52aaaf7e4c37c5ebe506a0a1297" Jan 26 09:28:43 crc 
kubenswrapper[4827]: I0126 09:28:43.259422 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.288599 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:43 crc kubenswrapper[4827]: E0126 09:28:43.289213 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="rabbitmq" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.289246 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="rabbitmq" Jan 26 09:28:43 crc kubenswrapper[4827]: E0126 09:28:43.289266 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="setup-container" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.289313 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="setup-container" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.289651 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" containerName="rabbitmq" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.290564 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.293115 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vt4gm" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.293520 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.293703 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.293893 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.294037 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.294058 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.294325 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.313070 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.467618 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.467885 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.467919 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.467941 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.467972 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf9t2\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-kube-api-access-kf9t2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468025 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468043 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468083 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468102 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.468155 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578330 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578403 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578440 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf9t2\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-kube-api-access-kf9t2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578485 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578568 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578589 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578611 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578698 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.578768 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.584907 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.585576 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.585850 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.586087 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.591981 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.594295 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.596133 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.596596 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.596608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.602270 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.609156 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf9t2\" (UniqueName: 
\"kubernetes.io/projected/a1cc30a0-73e5-4ffe-97c4-37779ea46d78-kube-api-access-kf9t2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.632687 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1cc30a0-73e5-4ffe-97c4-37779ea46d78\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.713164 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc01e51-9c3e-42ad-9ba6-11ad80b8366b" path="/var/lib/kubelet/pods/6cc01e51-9c3e-42ad-9ba6-11ad80b8366b/volumes" Jan 26 09:28:43 crc kubenswrapper[4827]: I0126 09:28:43.717612 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.138823 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.140692 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.152840 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.162110 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.229531 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d4c7e4-4f6a-402c-af73-84404c567c53","Type":"ContainerStarted","Data":"efd31d76bf6a4d962b28e743de8b518e28546374fdef05b123b982c5d4f72d25"} Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.251766 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1" exitCode=0 Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.251876 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1"} Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.251908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"} Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.251926 4827 scope.go:117] "RemoveContainer" containerID="a09649e50cc8f80c7bffb7ba2008e8c39022bbecc6b9368348ffba77350e153d" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.270509 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 09:28:44 crc 
kubenswrapper[4827]: I0126 09:28:44.298276 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.298473 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.298567 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.298758 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbpx\" (UniqueName: \"kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.298893 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " 
pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.299035 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.400412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.400918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.401618 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.401677 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 
crc kubenswrapper[4827]: I0126 09:28:44.406988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.407125 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.407198 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.407325 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbpx\" (UniqueName: \"kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.407919 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.408180 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.408547 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.427371 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbpx\" (UniqueName: \"kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx\") pod \"dnsmasq-dns-74d67684c-59pb6\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.469706 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:44 crc kubenswrapper[4827]: I0126 09:28:44.920847 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:28:45 crc kubenswrapper[4827]: I0126 09:28:45.277575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1cc30a0-73e5-4ffe-97c4-37779ea46d78","Type":"ContainerStarted","Data":"6480f25807f76bfa6e1997c794e0d108afb81529b9331ce0bf2284588ffbbaee"} Jan 26 09:28:45 crc kubenswrapper[4827]: I0126 09:28:45.283840 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-59pb6" event={"ID":"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f","Type":"ContainerStarted","Data":"09ebe10a93e3c961d3c0a743beee8a10ab208b90591ae1d205b32575db5dff25"} Jan 26 09:28:46 crc kubenswrapper[4827]: I0126 09:28:46.297107 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerID="79ae96e6db6838e24b5e0e2f27793dd952caff78ef157176c5a075f30ee14ceb" exitCode=0 Jan 26 09:28:46 crc kubenswrapper[4827]: I0126 09:28:46.297184 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-59pb6" event={"ID":"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f","Type":"ContainerDied","Data":"79ae96e6db6838e24b5e0e2f27793dd952caff78ef157176c5a075f30ee14ceb"} Jan 26 09:28:46 crc kubenswrapper[4827]: I0126 09:28:46.300417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1cc30a0-73e5-4ffe-97c4-37779ea46d78","Type":"ContainerStarted","Data":"64e1cb1d303e342b8521817eaf76e7ad4abe8bf3e0616a50bc9b6608f59bac37"} Jan 26 09:28:47 crc kubenswrapper[4827]: I0126 09:28:47.313244 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-59pb6" 
event={"ID":"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f","Type":"ContainerStarted","Data":"2f2f92b64689243440f4f998440a546fbaa6d931058f4ebcf71788fbfb2059f6"} Jan 26 09:28:47 crc kubenswrapper[4827]: I0126 09:28:47.337515 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74d67684c-59pb6" podStartSLOduration=3.337497185 podStartE2EDuration="3.337497185s" podCreationTimestamp="2026-01-26 09:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:28:47.332813907 +0000 UTC m=+1355.981485726" watchObservedRunningTime="2026-01-26 09:28:47.337497185 +0000 UTC m=+1355.986169004" Jan 26 09:28:48 crc kubenswrapper[4827]: I0126 09:28:48.323283 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.471759 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.530595 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.530888 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="dnsmasq-dns" containerID="cri-o://e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4" gracePeriod=10 Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.683732 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 09:28:54 crc kubenswrapper[4827]: E0126 09:28:54.684164 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66959c4_eb10_4fe0_ba7d_ac0f1c3c1baa.slice/crio-conmon-e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.685919 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.709253 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.831478 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skn4c\" (UniqueName: \"kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.831589 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.831821 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.831943 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.832229 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.832294 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935022 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935361 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935392 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935471 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935491 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.935525 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skn4c\" (UniqueName: \"kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.936730 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.937355 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.942809 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.942980 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.943237 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:54 crc kubenswrapper[4827]: I0126 09:28:54.980798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skn4c\" (UniqueName: \"kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c\") pod \"dnsmasq-dns-567fc67579-hw9ld\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.023475 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.071468 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.140229 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb\") pod \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.140308 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wds2\" (UniqueName: \"kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2\") pod \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.140355 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc\") pod \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.140521 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config\") pod \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\" (UID: \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.140596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb\") pod \"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\" (UID: 
\"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa\") " Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.169713 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2" (OuterVolumeSpecName: "kube-api-access-6wds2") pod "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" (UID: "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa"). InnerVolumeSpecName "kube-api-access-6wds2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.199038 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" (UID: "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.204940 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" (UID: "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.209581 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" (UID: "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.249876 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.249921 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.249935 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wds2\" (UniqueName: \"kubernetes.io/projected/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-kube-api-access-6wds2\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.249949 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.274774 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config" (OuterVolumeSpecName: "config") pod "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" (UID: "e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.351311 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.380407 4827 generic.go:334] "Generic (PLEG): container finished" podID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerID="e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4" exitCode=0 Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.380445 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" event={"ID":"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa","Type":"ContainerDied","Data":"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4"} Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.380474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" event={"ID":"e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa","Type":"ContainerDied","Data":"33a1eac3a0c006b4224e8b39f3760b389df4699382949198427f1c098da71a94"} Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.380492 4827 scope.go:117] "RemoveContainer" containerID="e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.380492 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-clzww" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.398343 4827 scope.go:117] "RemoveContainer" containerID="4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.418034 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.429122 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-clzww"] Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.431489 4827 scope.go:117] "RemoveContainer" containerID="e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4" Jan 26 09:28:55 crc kubenswrapper[4827]: E0126 09:28:55.431920 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4\": container with ID starting with e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4 not found: ID does not exist" containerID="e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.431945 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4"} err="failed to get container status \"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4\": rpc error: code = NotFound desc = could not find container \"e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4\": container with ID starting with e26a8092dbdfbab2828edf5aff8154de9c94b1d482156f02b63ec0884aa822b4 not found: ID does not exist" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.431963 4827 scope.go:117] "RemoveContainer" containerID="4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628" Jan 26 
09:28:55 crc kubenswrapper[4827]: E0126 09:28:55.432275 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628\": container with ID starting with 4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628 not found: ID does not exist" containerID="4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.432296 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628"} err="failed to get container status \"4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628\": rpc error: code = NotFound desc = could not find container \"4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628\": container with ID starting with 4426498dff45f6321811434883216ffc0917be800f42afc73659526373d90628 not found: ID does not exist" Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.528069 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 09:28:55 crc kubenswrapper[4827]: I0126 09:28:55.715326 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" path="/var/lib/kubelet/pods/e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa/volumes" Jan 26 09:28:56 crc kubenswrapper[4827]: I0126 09:28:56.389417 4827 generic.go:334] "Generic (PLEG): container finished" podID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerID="383970e8d1add77ef8bf63f2007dd12ca812bf38bda27cb82733c17026ea5421" exitCode=0 Jan 26 09:28:56 crc kubenswrapper[4827]: I0126 09:28:56.389487 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" 
event={"ID":"db1d206a-dafd-4abb-8163-8865e5ebdcd6","Type":"ContainerDied","Data":"383970e8d1add77ef8bf63f2007dd12ca812bf38bda27cb82733c17026ea5421"} Jan 26 09:28:56 crc kubenswrapper[4827]: I0126 09:28:56.389898 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" event={"ID":"db1d206a-dafd-4abb-8163-8865e5ebdcd6","Type":"ContainerStarted","Data":"2560fa89255eafa503a5fc09b34b962bc225375c9071973041762f17b5dfdffa"} Jan 26 09:28:57 crc kubenswrapper[4827]: I0126 09:28:57.402838 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" event={"ID":"db1d206a-dafd-4abb-8163-8865e5ebdcd6","Type":"ContainerStarted","Data":"ea3e157230cdc6d05efde7611eb7ba5b80d4f4c7d51b64082dbf18e92aa450d9"} Jan 26 09:28:57 crc kubenswrapper[4827]: I0126 09:28:57.403354 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:28:57 crc kubenswrapper[4827]: I0126 09:28:57.429280 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" podStartSLOduration=3.42926252 podStartE2EDuration="3.42926252s" podCreationTimestamp="2026-01-26 09:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:28:57.42707203 +0000 UTC m=+1366.075743919" watchObservedRunningTime="2026-01-26 09:28:57.42926252 +0000 UTC m=+1366.077934339" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.025842 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.092564 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.095961 4827 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-74d67684c-59pb6" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="dnsmasq-dns" containerID="cri-o://2f2f92b64689243440f4f998440a546fbaa6d931058f4ebcf71788fbfb2059f6" gracePeriod=10 Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.502835 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerID="2f2f92b64689243440f4f998440a546fbaa6d931058f4ebcf71788fbfb2059f6" exitCode=0 Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.503352 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-59pb6" event={"ID":"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f","Type":"ContainerDied","Data":"2f2f92b64689243440f4f998440a546fbaa6d931058f4ebcf71788fbfb2059f6"} Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.661105 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747061 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747153 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747190 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbpx\" (UniqueName: \"kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: 
\"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747209 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747234 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.747366 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam\") pod \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\" (UID: \"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f\") " Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.762982 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx" (OuterVolumeSpecName: "kube-api-access-5mbpx") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "kube-api-access-5mbpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.815415 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.820338 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.849981 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbpx\" (UniqueName: \"kubernetes.io/projected/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-kube-api-access-5mbpx\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.850011 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.850021 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.851413 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.851843 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.885068 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config" (OuterVolumeSpecName: "config") pod "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" (UID: "d7f2d3f8-e11a-47b2-a050-6ecaefb4680f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.952171 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.952215 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:05 crc kubenswrapper[4827]: I0126 09:29:05.952225 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.515550 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-59pb6" 
event={"ID":"d7f2d3f8-e11a-47b2-a050-6ecaefb4680f","Type":"ContainerDied","Data":"09ebe10a93e3c961d3c0a743beee8a10ab208b90591ae1d205b32575db5dff25"} Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.516842 4827 scope.go:117] "RemoveContainer" containerID="2f2f92b64689243440f4f998440a546fbaa6d931058f4ebcf71788fbfb2059f6" Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.515646 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-59pb6" Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.545406 4827 scope.go:117] "RemoveContainer" containerID="79ae96e6db6838e24b5e0e2f27793dd952caff78ef157176c5a075f30ee14ceb" Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.556779 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:29:06 crc kubenswrapper[4827]: I0126 09:29:06.565315 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-59pb6"] Jan 26 09:29:07 crc kubenswrapper[4827]: I0126 09:29:07.712037 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" path="/var/lib/kubelet/pods/d7f2d3f8-e11a-47b2-a050-6ecaefb4680f/volumes" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.257278 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"] Jan 26 09:29:15 crc kubenswrapper[4827]: E0126 09:29:15.258185 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="init" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258197 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="init" Jan 26 09:29:15 crc kubenswrapper[4827]: E0126 09:29:15.258206 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="init" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258211 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="init" Jan 26 09:29:15 crc kubenswrapper[4827]: E0126 09:29:15.258219 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258225 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: E0126 09:29:15.258237 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258243 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258416 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f2d3f8-e11a-47b2-a050-6ecaefb4680f" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.258440 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66959c4-eb10-4fe0-ba7d-ac0f1c3c1baa" containerName="dnsmasq-dns" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.259014 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.267838 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.268348 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.268403 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.268488 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.271620 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"] Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.322826 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.323086 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldng\" (UniqueName: \"kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 
09:29:15.323336 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.323445 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.426143 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qldng\" (UniqueName: \"kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.426237 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.426296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.426456 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.435399 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.439070 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.440736 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.451505 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qldng\" (UniqueName: \"kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.577740 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.600677 4827 generic.go:334] "Generic (PLEG): container finished" podID="d2d4c7e4-4f6a-402c-af73-84404c567c53" containerID="efd31d76bf6a4d962b28e743de8b518e28546374fdef05b123b982c5d4f72d25" exitCode=0
Jan 26 09:29:15 crc kubenswrapper[4827]: I0126 09:29:15.600727 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d4c7e4-4f6a-402c-af73-84404c567c53","Type":"ContainerDied","Data":"efd31d76bf6a4d962b28e743de8b518e28546374fdef05b123b982c5d4f72d25"}
Jan 26 09:29:16 crc kubenswrapper[4827]: W0126 09:29:16.143767 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f8f40b_ea88_4069_8e6c_f1729de76b8a.slice/crio-6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f WatchSource:0}: Error finding container 6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f: Status 404 returned error can't find the container with id 6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f
Jan 26 09:29:16 crc kubenswrapper[4827]: I0126 09:29:16.146068 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 09:29:16 crc kubenswrapper[4827]: I0126 09:29:16.146410 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"]
Jan 26 09:29:16 crc kubenswrapper[4827]: I0126 09:29:16.612203 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" event={"ID":"90f8f40b-ea88-4069-8e6c-f1729de76b8a","Type":"ContainerStarted","Data":"6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f"}
Jan 26 09:29:16 crc kubenswrapper[4827]: I0126 09:29:16.614044 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d4c7e4-4f6a-402c-af73-84404c567c53","Type":"ContainerStarted","Data":"2a61f9d381989e34832b9de68c0710a7395b7714015ce225f94bc50694f8422d"}
Jan 26 09:29:16 crc kubenswrapper[4827]: I0126 09:29:16.614761 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 26 09:29:18 crc kubenswrapper[4827]: I0126 09:29:18.645408 4827 generic.go:334] "Generic (PLEG): container finished" podID="a1cc30a0-73e5-4ffe-97c4-37779ea46d78" containerID="64e1cb1d303e342b8521817eaf76e7ad4abe8bf3e0616a50bc9b6608f59bac37" exitCode=0
Jan 26 09:29:18 crc kubenswrapper[4827]: I0126 09:29:18.645822 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1cc30a0-73e5-4ffe-97c4-37779ea46d78","Type":"ContainerDied","Data":"64e1cb1d303e342b8521817eaf76e7ad4abe8bf3e0616a50bc9b6608f59bac37"}
Jan 26 09:29:18 crc kubenswrapper[4827]: I0126 09:29:18.674667 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.67463568 podStartE2EDuration="38.67463568s" podCreationTimestamp="2026-01-26 09:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:29:16.650661951 +0000 UTC m=+1385.299333770" watchObservedRunningTime="2026-01-26 09:29:18.67463568 +0000 UTC m=+1387.323307489"
Jan 26 09:29:20 crc kubenswrapper[4827]: I0126 09:29:20.670064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1cc30a0-73e5-4ffe-97c4-37779ea46d78","Type":"ContainerStarted","Data":"a54c6319b97a134f20a1e0e650fa842de3ef3579ec30f59c6d5feba2959b8cb1"}
Jan 26 09:29:20 crc kubenswrapper[4827]: I0126 09:29:20.670829 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:29:20 crc kubenswrapper[4827]: I0126 09:29:20.709723 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.709699603 podStartE2EDuration="37.709699603s" podCreationTimestamp="2026-01-26 09:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:29:20.700050538 +0000 UTC m=+1389.348722357" watchObservedRunningTime="2026-01-26 09:29:20.709699603 +0000 UTC m=+1389.358371432"
Jan 26 09:29:29 crc kubenswrapper[4827]: I0126 09:29:29.782426 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" event={"ID":"90f8f40b-ea88-4069-8e6c-f1729de76b8a","Type":"ContainerStarted","Data":"cb5c94c31aaa9e0c26c141ef2f47551bb7994a9fafc1183daff236c74b7a472d"}
Jan 26 09:29:29 crc kubenswrapper[4827]: I0126 09:29:29.804922 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" podStartSLOduration=1.567743136 podStartE2EDuration="14.804906995s" podCreationTimestamp="2026-01-26 09:29:15 +0000 UTC" firstStartedPulling="2026-01-26 09:29:16.145833535 +0000 UTC m=+1384.794505354" lastFinishedPulling="2026-01-26 09:29:29.382997394 +0000 UTC m=+1398.031669213" observedRunningTime="2026-01-26 09:29:29.803313151 +0000 UTC m=+1398.451984970" watchObservedRunningTime="2026-01-26 09:29:29.804906995 +0000 UTC m=+1398.453578814"
Jan 26 09:29:30 crc kubenswrapper[4827]: I0126 09:29:30.984995 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 26 09:29:33 crc kubenswrapper[4827]: I0126 09:29:33.721880 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 09:29:43 crc kubenswrapper[4827]: I0126 09:29:43.893595 4827 generic.go:334] "Generic (PLEG): container finished" podID="90f8f40b-ea88-4069-8e6c-f1729de76b8a" containerID="cb5c94c31aaa9e0c26c141ef2f47551bb7994a9fafc1183daff236c74b7a472d" exitCode=0
Jan 26 09:29:43 crc kubenswrapper[4827]: I0126 09:29:43.893775 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" event={"ID":"90f8f40b-ea88-4069-8e6c-f1729de76b8a","Type":"ContainerDied","Data":"cb5c94c31aaa9e0c26c141ef2f47551bb7994a9fafc1183daff236c74b7a472d"}
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.337672 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.449944 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qldng\" (UniqueName: \"kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng\") pod \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") "
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.450170 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory\") pod \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") "
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.450262 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam\") pod \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") "
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.450305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle\") pod \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\" (UID: \"90f8f40b-ea88-4069-8e6c-f1729de76b8a\") "
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.457827 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "90f8f40b-ea88-4069-8e6c-f1729de76b8a" (UID: "90f8f40b-ea88-4069-8e6c-f1729de76b8a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.470819 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng" (OuterVolumeSpecName: "kube-api-access-qldng") pod "90f8f40b-ea88-4069-8e6c-f1729de76b8a" (UID: "90f8f40b-ea88-4069-8e6c-f1729de76b8a"). InnerVolumeSpecName "kube-api-access-qldng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.484740 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90f8f40b-ea88-4069-8e6c-f1729de76b8a" (UID: "90f8f40b-ea88-4069-8e6c-f1729de76b8a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.484795 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory" (OuterVolumeSpecName: "inventory") pod "90f8f40b-ea88-4069-8e6c-f1729de76b8a" (UID: "90f8f40b-ea88-4069-8e6c-f1729de76b8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.552217 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.552253 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.552265 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f8f40b-ea88-4069-8e6c-f1729de76b8a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.552274 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qldng\" (UniqueName: \"kubernetes.io/projected/90f8f40b-ea88-4069-8e6c-f1729de76b8a-kube-api-access-qldng\") on node \"crc\" DevicePath \"\""
Jan 26 09:29:45 crc kubenswrapper[4827]: E0126 09:29:45.860320 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f8f40b_ea88_4069_8e6c_f1729de76b8a.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.910295 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65" event={"ID":"90f8f40b-ea88-4069-8e6c-f1729de76b8a","Type":"ContainerDied","Data":"6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f"}
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.910522 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c75c494b890b41351d0617a8cec4ff79865f6d66c1a546f53159173f4a6237f"
Jan 26 09:29:45 crc kubenswrapper[4827]: I0126 09:29:45.910345 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.000410 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"]
Jan 26 09:29:46 crc kubenswrapper[4827]: E0126 09:29:46.001108 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f8f40b-ea88-4069-8e6c-f1729de76b8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.001179 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f8f40b-ea88-4069-8e6c-f1729de76b8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.001386 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f8f40b-ea88-4069-8e6c-f1729de76b8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.001968 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.008711 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.008962 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.009096 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.009095 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.021242 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"]
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.160019 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.160129 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.160169 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.160206 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6x9\" (UniqueName: \"kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.261893 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.261961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6x9\" (UniqueName: \"kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.262049 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.262080 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.270329 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.271105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.271595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.279804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6x9\" (UniqueName: \"kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.316610 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"
Jan 26 09:29:46 crc kubenswrapper[4827]: I0126 09:29:46.994861 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"]
Jan 26 09:29:46 crc kubenswrapper[4827]: W0126 09:29:46.998779 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99a53d5_61a2_4ef0_b0fc_efc0dbabcbb8.slice/crio-571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f WatchSource:0}: Error finding container 571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f: Status 404 returned error can't find the container with id 571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.324884 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"]
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.327072 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.342591 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"]
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.482052 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.482113 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvv9\" (UniqueName: \"kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.482197 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.583938 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.583990 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffvv9\" (UniqueName: \"kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.584023 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.584377 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.584425 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.606965 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffvv9\" (UniqueName: \"kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9\") pod \"redhat-operators-xlspb\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.668824 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.944626 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"]
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.945100 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" event={"ID":"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8","Type":"ContainerStarted","Data":"12aab131f1dc47605dbbc80b0ed0f97ba005b574ec162eed75a9bb3250d950f1"}
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.945128 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" event={"ID":"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8","Type":"ContainerStarted","Data":"571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f"}
Jan 26 09:29:47 crc kubenswrapper[4827]: I0126 09:29:47.970175 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" podStartSLOduration=2.519233783 podStartE2EDuration="2.9701555s" podCreationTimestamp="2026-01-26 09:29:45 +0000 UTC" firstStartedPulling="2026-01-26 09:29:47.002510022 +0000 UTC m=+1415.651181841" lastFinishedPulling="2026-01-26 09:29:47.453431739 +0000 UTC m=+1416.102103558" observedRunningTime="2026-01-26 09:29:47.96212124 +0000 UTC m=+1416.610793059" watchObservedRunningTime="2026-01-26 09:29:47.9701555 +0000 UTC m=+1416.618827319"
Jan 26 09:29:48 crc kubenswrapper[4827]: I0126 09:29:48.956868 4827 generic.go:334] "Generic (PLEG): container finished" podID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerID="720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832" exitCode=0
Jan 26 09:29:48 crc kubenswrapper[4827]: I0126 09:29:48.956921 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerDied","Data":"720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832"}
Jan 26 09:29:48 crc kubenswrapper[4827]: I0126 09:29:48.956967 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerStarted","Data":"e297d2d53269641fb41752c04f51f08fbb7708b7dde2a7c5164b1d68989377c2"}
Jan 26 09:29:49 crc kubenswrapper[4827]: I0126 09:29:49.974078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerStarted","Data":"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5"}
Jan 26 09:29:53 crc kubenswrapper[4827]: I0126 09:29:52.999616 4827 generic.go:334] "Generic (PLEG): container finished" podID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerID="fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5" exitCode=0
Jan 26 09:29:53 crc kubenswrapper[4827]: I0126 09:29:52.999703 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerDied","Data":"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5"}
Jan 26 09:29:54 crc kubenswrapper[4827]: I0126 09:29:54.009834 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerStarted","Data":"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0"}
Jan 26 09:29:54 crc kubenswrapper[4827]: I0126 09:29:54.033663 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xlspb" podStartSLOduration=2.548229601 podStartE2EDuration="7.033627306s" podCreationTimestamp="2026-01-26 09:29:47 +0000 UTC" firstStartedPulling="2026-01-26 09:29:48.958587949 +0000 UTC m=+1417.607259768" lastFinishedPulling="2026-01-26 09:29:53.443985644 +0000 UTC m=+1422.092657473" observedRunningTime="2026-01-26 09:29:54.029466761 +0000 UTC m=+1422.678138580" watchObservedRunningTime="2026-01-26 09:29:54.033627306 +0000 UTC m=+1422.682299115"
Jan 26 09:29:57 crc kubenswrapper[4827]: I0126 09:29:57.668972 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:57 crc kubenswrapper[4827]: I0126 09:29:57.670076 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xlspb"
Jan 26 09:29:58 crc kubenswrapper[4827]: I0126 09:29:58.738448 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xlspb" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="registry-server" probeResult="failure" output=<
Jan 26 09:29:58 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s
Jan 26 09:29:58 crc kubenswrapper[4827]: >
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.148311 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"]
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.149754 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.153282 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.161340 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"]
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.165459 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.314566 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.314939 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qdm\" (UniqueName: \"kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.315108 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.417730 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.416629 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.417848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6qdm\" (UniqueName: \"kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.418306 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.425420 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.437530 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6qdm\" (UniqueName: \"kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm\") pod \"collect-profiles-29490330-q78km\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:00 crc kubenswrapper[4827]: I0126 09:30:00.469831 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
Jan 26 09:30:01 crc kubenswrapper[4827]: I0126 09:30:01.078434 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"]
Jan 26 09:30:02 crc kubenswrapper[4827]: I0126 09:30:02.071265 4827 generic.go:334] "Generic (PLEG): container finished" podID="6ca69756-a95e-4358-9238-8ebf213dd239" containerID="09cccfb2652366ecc5011d0b643001a192fb49dc24233023a7fe55251f095123" exitCode=0
Jan 26 09:30:02 crc kubenswrapper[4827]: I0126 09:30:02.071565 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km" event={"ID":"6ca69756-a95e-4358-9238-8ebf213dd239","Type":"ContainerDied","Data":"09cccfb2652366ecc5011d0b643001a192fb49dc24233023a7fe55251f095123"}
Jan 26 09:30:02 crc kubenswrapper[4827]: I0126 09:30:02.071594 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"
event={"ID":"6ca69756-a95e-4358-9238-8ebf213dd239","Type":"ContainerStarted","Data":"8cd0f9d569d50153cad5caa52a0cb01319495cb17692a6232d3a33e92f77beeb"} Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.570246 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.586974 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6qdm\" (UniqueName: \"kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm\") pod \"6ca69756-a95e-4358-9238-8ebf213dd239\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.587062 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume\") pod \"6ca69756-a95e-4358-9238-8ebf213dd239\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.587195 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume\") pod \"6ca69756-a95e-4358-9238-8ebf213dd239\" (UID: \"6ca69756-a95e-4358-9238-8ebf213dd239\") " Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.587865 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ca69756-a95e-4358-9238-8ebf213dd239" (UID: "6ca69756-a95e-4358-9238-8ebf213dd239"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.588682 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca69756-a95e-4358-9238-8ebf213dd239-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.604909 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ca69756-a95e-4358-9238-8ebf213dd239" (UID: "6ca69756-a95e-4358-9238-8ebf213dd239"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.605139 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm" (OuterVolumeSpecName: "kube-api-access-n6qdm") pod "6ca69756-a95e-4358-9238-8ebf213dd239" (UID: "6ca69756-a95e-4358-9238-8ebf213dd239"). InnerVolumeSpecName "kube-api-access-n6qdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.689628 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6qdm\" (UniqueName: \"kubernetes.io/projected/6ca69756-a95e-4358-9238-8ebf213dd239-kube-api-access-n6qdm\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:03 crc kubenswrapper[4827]: I0126 09:30:03.689689 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca69756-a95e-4358-9238-8ebf213dd239-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:04 crc kubenswrapper[4827]: I0126 09:30:04.088831 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km" event={"ID":"6ca69756-a95e-4358-9238-8ebf213dd239","Type":"ContainerDied","Data":"8cd0f9d569d50153cad5caa52a0cb01319495cb17692a6232d3a33e92f77beeb"} Jan 26 09:30:04 crc kubenswrapper[4827]: I0126 09:30:04.088880 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cd0f9d569d50153cad5caa52a0cb01319495cb17692a6232d3a33e92f77beeb" Jan 26 09:30:04 crc kubenswrapper[4827]: I0126 09:30:04.088882 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km" Jan 26 09:30:07 crc kubenswrapper[4827]: I0126 09:30:07.761305 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xlspb" Jan 26 09:30:07 crc kubenswrapper[4827]: I0126 09:30:07.858142 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xlspb" Jan 26 09:30:08 crc kubenswrapper[4827]: I0126 09:30:08.026011 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"] Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.137372 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xlspb" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="registry-server" containerID="cri-o://e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0" gracePeriod=2 Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.527008 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xlspb" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.705132 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities\") pod \"e35934a4-7c92-4d41-9842-a3bc234e0f28\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.705195 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffvv9\" (UniqueName: \"kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9\") pod \"e35934a4-7c92-4d41-9842-a3bc234e0f28\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.705333 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content\") pod \"e35934a4-7c92-4d41-9842-a3bc234e0f28\" (UID: \"e35934a4-7c92-4d41-9842-a3bc234e0f28\") " Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.707344 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities" (OuterVolumeSpecName: "utilities") pod "e35934a4-7c92-4d41-9842-a3bc234e0f28" (UID: "e35934a4-7c92-4d41-9842-a3bc234e0f28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.712937 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9" (OuterVolumeSpecName: "kube-api-access-ffvv9") pod "e35934a4-7c92-4d41-9842-a3bc234e0f28" (UID: "e35934a4-7c92-4d41-9842-a3bc234e0f28"). InnerVolumeSpecName "kube-api-access-ffvv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.807832 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffvv9\" (UniqueName: \"kubernetes.io/projected/e35934a4-7c92-4d41-9842-a3bc234e0f28-kube-api-access-ffvv9\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.807876 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.839016 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e35934a4-7c92-4d41-9842-a3bc234e0f28" (UID: "e35934a4-7c92-4d41-9842-a3bc234e0f28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:30:09 crc kubenswrapper[4827]: I0126 09:30:09.909342 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35934a4-7c92-4d41-9842-a3bc234e0f28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.147522 4827 generic.go:334] "Generic (PLEG): container finished" podID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerID="e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0" exitCode=0 Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.147566 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerDied","Data":"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0"} Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.147579 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlspb" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.147596 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlspb" event={"ID":"e35934a4-7c92-4d41-9842-a3bc234e0f28","Type":"ContainerDied","Data":"e297d2d53269641fb41752c04f51f08fbb7708b7dde2a7c5164b1d68989377c2"} Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.147616 4827 scope.go:117] "RemoveContainer" containerID="e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.168755 4827 scope.go:117] "RemoveContainer" containerID="fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.205644 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"] Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.205761 4827 scope.go:117] "RemoveContainer" containerID="720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.210963 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xlspb"] Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.246716 4827 scope.go:117] "RemoveContainer" containerID="e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0" Jan 26 09:30:10 crc kubenswrapper[4827]: E0126 09:30:10.247212 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0\": container with ID starting with e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0 not found: ID does not exist" containerID="e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.247266 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0"} err="failed to get container status \"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0\": rpc error: code = NotFound desc = could not find container \"e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0\": container with ID starting with e0fb841db557452a091a0e45102f797633a452dcbadd04833177442a02c4fcb0 not found: ID does not exist" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.247296 4827 scope.go:117] "RemoveContainer" containerID="fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5" Jan 26 09:30:10 crc kubenswrapper[4827]: E0126 09:30:10.247673 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5\": container with ID starting with fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5 not found: ID does not exist" containerID="fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.247699 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5"} err="failed to get container status \"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5\": rpc error: code = NotFound desc = could not find container \"fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5\": container with ID starting with fae3a87e5f7144bea4d3849afcba36d6f2e7e816315317bef8627f0e750275c5 not found: ID does not exist" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.247716 4827 scope.go:117] "RemoveContainer" containerID="720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832" Jan 26 09:30:10 crc kubenswrapper[4827]: E0126 
09:30:10.248006 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832\": container with ID starting with 720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832 not found: ID does not exist" containerID="720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832" Jan 26 09:30:10 crc kubenswrapper[4827]: I0126 09:30:10.248036 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832"} err="failed to get container status \"720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832\": rpc error: code = NotFound desc = could not find container \"720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832\": container with ID starting with 720c55d079181dbeaf92f57e1bd1184bad96b97e0d144ee651fc9954f0522832 not found: ID does not exist" Jan 26 09:30:11 crc kubenswrapper[4827]: I0126 09:30:11.713618 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" path="/var/lib/kubelet/pods/e35934a4-7c92-4d41-9842-a3bc234e0f28/volumes" Jan 26 09:30:15 crc kubenswrapper[4827]: I0126 09:30:15.179188 4827 scope.go:117] "RemoveContainer" containerID="ffe0dc435e87cd81b51b6d32448bfdf905d104d3e9c2dc6f108e78ea68d586cb" Jan 26 09:31:12 crc kubenswrapper[4827]: I0126 09:31:12.268623 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:31:12 crc kubenswrapper[4827]: I0126 09:31:12.269529 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" 
podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:31:15 crc kubenswrapper[4827]: I0126 09:31:15.285785 4827 scope.go:117] "RemoveContainer" containerID="bd8dc6f0d043ed7d40d6eca2f1fe2642cae1c36669b4c7e467562cbe17dbd00e" Jan 26 09:31:15 crc kubenswrapper[4827]: I0126 09:31:15.317440 4827 scope.go:117] "RemoveContainer" containerID="31bdf708d5b0eacca7094ada130a76850591ac86386f7e16fd5a6d231f186e89" Jan 26 09:31:15 crc kubenswrapper[4827]: I0126 09:31:15.392717 4827 scope.go:117] "RemoveContainer" containerID="56cab7b8704b3e651f9d40f0e0b77cbf71a4a53420baec52c47578fcafd29b84" Jan 26 09:31:15 crc kubenswrapper[4827]: I0126 09:31:15.418366 4827 scope.go:117] "RemoveContainer" containerID="f673a28c07db6e9b4fffa5321a6f782d479a60c08355ccffbe919ebd9373a64a" Jan 26 09:31:42 crc kubenswrapper[4827]: I0126 09:31:42.671245 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:31:42 crc kubenswrapper[4827]: I0126 09:31:42.671868 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.007850 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"] Jan 26 09:31:54 crc kubenswrapper[4827]: E0126 09:31:54.009008 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="extract-content" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009032 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="extract-content" Jan 26 09:31:54 crc kubenswrapper[4827]: E0126 09:31:54.009074 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="extract-utilities" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009087 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="extract-utilities" Jan 26 09:31:54 crc kubenswrapper[4827]: E0126 09:31:54.009108 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="registry-server" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009119 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="registry-server" Jan 26 09:31:54 crc kubenswrapper[4827]: E0126 09:31:54.009135 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca69756-a95e-4358-9238-8ebf213dd239" containerName="collect-profiles" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009434 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca69756-a95e-4358-9238-8ebf213dd239" containerName="collect-profiles" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009733 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca69756-a95e-4358-9238-8ebf213dd239" containerName="collect-profiles" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.009773 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35934a4-7c92-4d41-9842-a3bc234e0f28" containerName="registry-server" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.011682 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.035347 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"] Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.074308 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.074730 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.074999 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qbjv\" (UniqueName: \"kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.175965 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.176056 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.176131 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qbjv\" (UniqueName: \"kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.176563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.178132 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.208117 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qbjv\" (UniqueName: \"kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv\") pod \"certified-operators-4kj2l\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") " pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.364814 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kj2l" Jan 26 09:31:54 crc kubenswrapper[4827]: I0126 09:31:54.792882 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"] Jan 26 09:31:55 crc kubenswrapper[4827]: I0126 09:31:55.805814 4827 generic.go:334] "Generic (PLEG): container finished" podID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerID="1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c" exitCode=0 Jan 26 09:31:55 crc kubenswrapper[4827]: I0126 09:31:55.806129 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerDied","Data":"1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c"} Jan 26 09:31:55 crc kubenswrapper[4827]: I0126 09:31:55.806162 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerStarted","Data":"05f143a57cc07e3858337b5fdc952e97de03e7f2d07b60d727a5f10b66a5c3c8"} Jan 26 09:31:56 crc kubenswrapper[4827]: I0126 09:31:56.816314 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerStarted","Data":"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"} Jan 26 09:31:57 crc kubenswrapper[4827]: I0126 09:31:57.825564 4827 generic.go:334] "Generic (PLEG): container finished" podID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerID="ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980" exitCode=0 Jan 26 09:31:57 crc kubenswrapper[4827]: I0126 09:31:57.825840 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" 
event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerDied","Data":"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"}
Jan 26 09:31:58 crc kubenswrapper[4827]: I0126 09:31:58.835286 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerStarted","Data":"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"}
Jan 26 09:31:58 crc kubenswrapper[4827]: I0126 09:31:58.860532 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kj2l" podStartSLOduration=3.474674492 podStartE2EDuration="5.86051257s" podCreationTimestamp="2026-01-26 09:31:53 +0000 UTC" firstStartedPulling="2026-01-26 09:31:55.808678918 +0000 UTC m=+1544.457350727" lastFinishedPulling="2026-01-26 09:31:58.194516986 +0000 UTC m=+1546.843188805" observedRunningTime="2026-01-26 09:31:58.852186307 +0000 UTC m=+1547.500858146" watchObservedRunningTime="2026-01-26 09:31:58.86051257 +0000 UTC m=+1547.509184389"
Jan 26 09:32:04 crc kubenswrapper[4827]: I0126 09:32:04.365626 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:04 crc kubenswrapper[4827]: I0126 09:32:04.366822 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:04 crc kubenswrapper[4827]: I0126 09:32:04.414257 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:04 crc kubenswrapper[4827]: I0126 09:32:04.927255 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:04 crc kubenswrapper[4827]: I0126 09:32:04.975004 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"]
Jan 26 09:32:06 crc kubenswrapper[4827]: I0126 09:32:06.894728 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kj2l" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="registry-server" containerID="cri-o://43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a" gracePeriod=2
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.437833 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.543252 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qbjv\" (UniqueName: \"kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv\") pod \"9d0ad465-f37e-4510-a893-fe3d970a118f\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") "
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.544149 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content\") pod \"9d0ad465-f37e-4510-a893-fe3d970a118f\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") "
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.544273 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities\") pod \"9d0ad465-f37e-4510-a893-fe3d970a118f\" (UID: \"9d0ad465-f37e-4510-a893-fe3d970a118f\") "
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.544913 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities" (OuterVolumeSpecName: "utilities") pod "9d0ad465-f37e-4510-a893-fe3d970a118f" (UID: "9d0ad465-f37e-4510-a893-fe3d970a118f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.549693 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv" (OuterVolumeSpecName: "kube-api-access-7qbjv") pod "9d0ad465-f37e-4510-a893-fe3d970a118f" (UID: "9d0ad465-f37e-4510-a893-fe3d970a118f"). InnerVolumeSpecName "kube-api-access-7qbjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.597064 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d0ad465-f37e-4510-a893-fe3d970a118f" (UID: "9d0ad465-f37e-4510-a893-fe3d970a118f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.647059 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.647099 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0ad465-f37e-4510-a893-fe3d970a118f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.647109 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qbjv\" (UniqueName: \"kubernetes.io/projected/9d0ad465-f37e-4510-a893-fe3d970a118f-kube-api-access-7qbjv\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.905856 4827 generic.go:334] "Generic (PLEG): container finished"
podID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerID="43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a" exitCode=0
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.905884 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kj2l"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.905905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerDied","Data":"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"}
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.905937 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kj2l" event={"ID":"9d0ad465-f37e-4510-a893-fe3d970a118f","Type":"ContainerDied","Data":"05f143a57cc07e3858337b5fdc952e97de03e7f2d07b60d727a5f10b66a5c3c8"}
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.905959 4827 scope.go:117] "RemoveContainer" containerID="43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.931121 4827 scope.go:117] "RemoveContainer" containerID="ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.932071 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"]
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.941771 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kj2l"]
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.962600 4827 scope.go:117] "RemoveContainer" containerID="1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.989372 4827 scope.go:117] "RemoveContainer" containerID="43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"
Jan 26 09:32:07 crc kubenswrapper[4827]: E0126 09:32:07.989913 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a\": container with ID starting with 43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a not found: ID does not exist" containerID="43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.989956 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a"} err="failed to get container status \"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a\": rpc error: code = NotFound desc = could not find container \"43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a\": container with ID starting with 43667dc984887aac2fda4f659786f6eb6b8658089b9247791b2fb37b696efd2a not found: ID does not exist"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.989982 4827 scope.go:117] "RemoveContainer" containerID="ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"
Jan 26 09:32:07 crc kubenswrapper[4827]: E0126 09:32:07.990707 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980\": container with ID starting with ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980 not found: ID does not exist" containerID="ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.990731 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980"} err="failed to get container status \"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980\": rpc error: code = NotFound desc = could not find container \"ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980\": container with ID starting with ba7c90c600ea2f4dc230689770cdf5722cce1c320e00b9b1fba0dfc3b7569980 not found: ID does not exist"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.990745 4827 scope.go:117] "RemoveContainer" containerID="1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c"
Jan 26 09:32:07 crc kubenswrapper[4827]: E0126 09:32:07.991037 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c\": container with ID starting with 1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c not found: ID does not exist" containerID="1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c"
Jan 26 09:32:07 crc kubenswrapper[4827]: I0126 09:32:07.991078 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c"} err="failed to get container status \"1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c\": rpc error: code = NotFound desc = could not find container \"1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c\": container with ID starting with 1051b2e73dc51b6a145fe0bbea096e214a658502732640bcd217065e4c3b994c not found: ID does not exist"
Jan 26 09:32:09 crc kubenswrapper[4827]: I0126 09:32:09.713131 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" path="/var/lib/kubelet/pods/9d0ad465-f37e-4510-a893-fe3d970a118f/volumes"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126
09:32:12.269166 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.270273 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.270437 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.271253 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.271408 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" gracePeriod=600
Jan 26 09:32:12 crc kubenswrapper[4827]: E0126 09:32:12.461986 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.949776 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" exitCode=0
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.949827 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"}
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.949867 4827 scope.go:117] "RemoveContainer" containerID="1c3223752972e038be12eb72189f55b795f27b1dd36acdb934d6a50aaf1c22e1"
Jan 26 09:32:12 crc kubenswrapper[4827]: I0126 09:32:12.950614 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:32:12 crc kubenswrapper[4827]: E0126 09:32:12.951009 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:32:28 crc kubenswrapper[4827]: I0126 09:32:28.703258 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:32:28 crc kubenswrapper[4827]: E0126 09:32:28.704252 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.231744 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4mrlg"]
Jan 26 09:32:29 crc kubenswrapper[4827]: E0126 09:32:29.232376 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="extract-utilities"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.232423 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="extract-utilities"
Jan 26 09:32:29 crc kubenswrapper[4827]: E0126 09:32:29.232437 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="extract-content"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.232447 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="extract-content"
Jan 26 09:32:29 crc kubenswrapper[4827]: E0126 09:32:29.232495 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="registry-server"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.232506 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="registry-server"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.232861 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0ad465-f37e-4510-a893-fe3d970a118f" containerName="registry-server"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126
09:32:29.234958 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mrlg"]
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.235089 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.291454 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.291519 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.291620 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpbz\" (UniqueName: \"kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.392945 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.393002 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.393046 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpbz\" (UniqueName: \"kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.393476 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.393527 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.414676 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpbz\" (UniqueName: \"kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz\") pod \"redhat-marketplace-4mrlg\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") " pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:29 crc kubenswrapper[4827]: I0126 09:32:29.560194 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:30 crc kubenswrapper[4827]: I0126 09:32:30.047254 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mrlg"]
Jan 26 09:32:30 crc kubenswrapper[4827]: I0126 09:32:30.128319 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerStarted","Data":"f444097be15df254a0638e3a2670f63dfef2889f9573bc2fd18b5fa2eb342de5"}
Jan 26 09:32:31 crc kubenswrapper[4827]: I0126 09:32:31.140022 4827 generic.go:334] "Generic (PLEG): container finished" podID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerID="883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f" exitCode=0
Jan 26 09:32:31 crc kubenswrapper[4827]: I0126 09:32:31.140128 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerDied","Data":"883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f"}
Jan 26 09:32:32 crc kubenswrapper[4827]: I0126 09:32:32.162418 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerStarted","Data":"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c"}
Jan 26 09:32:33 crc kubenswrapper[4827]: I0126 09:32:33.172991 4827 generic.go:334] "Generic (PLEG): container finished" podID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerID="c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c" exitCode=0
Jan 26 09:32:33 crc kubenswrapper[4827]: I0126 09:32:33.173073 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg"
event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerDied","Data":"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c"}
Jan 26 09:32:35 crc kubenswrapper[4827]: I0126 09:32:35.198267 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerStarted","Data":"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0"}
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.733698 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4mrlg" podStartSLOduration=6.742928012 podStartE2EDuration="9.733681652s" podCreationTimestamp="2026-01-26 09:32:29 +0000 UTC" firstStartedPulling="2026-01-26 09:32:31.141606845 +0000 UTC m=+1579.790278664" lastFinishedPulling="2026-01-26 09:32:34.132360485 +0000 UTC m=+1582.781032304" observedRunningTime="2026-01-26 09:32:35.224351501 +0000 UTC m=+1583.873023320" watchObservedRunningTime="2026-01-26 09:32:38.733681652 +0000 UTC m=+1587.382353471"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.738013 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qpsjj"]
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.742745 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.750282 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpsjj"]
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.869818 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.870197 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.870960 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.973422 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.973858 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.973969 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.974656 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:38 crc kubenswrapper[4827]: I0126 09:32:38.975322 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.001486 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h\") pod \"community-operators-qpsjj\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.069548 4827 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-qpsjj"
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.561075 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.561353 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.564503 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpsjj"]
Jan 26 09:32:39 crc kubenswrapper[4827]: I0126 09:32:39.627289 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:40 crc kubenswrapper[4827]: I0126 09:32:40.252003 4827 generic.go:334] "Generic (PLEG): container finished" podID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerID="5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86" exitCode=0
Jan 26 09:32:40 crc kubenswrapper[4827]: I0126 09:32:40.252111 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerDied","Data":"5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86"}
Jan 26 09:32:40 crc kubenswrapper[4827]: I0126 09:32:40.253091 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerStarted","Data":"c04a492f8c354c4f65aa6b6869e61e8e139c6e4c840b20d6709f4ad9786d73a5"}
Jan 26 09:32:40 crc kubenswrapper[4827]: I0126 09:32:40.323887 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:41 crc kubenswrapper[4827]: I0126 09:32:41.302568 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerStarted","Data":"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6"}
Jan 26 09:32:41 crc kubenswrapper[4827]: I0126 09:32:41.912356 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mrlg"]
Jan 26 09:32:42 crc kubenswrapper[4827]: I0126 09:32:42.310602 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4mrlg" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="registry-server" containerID="cri-o://aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0" gracePeriod=2
Jan 26 09:32:42 crc kubenswrapper[4827]: I0126 09:32:42.702804 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:32:42 crc kubenswrapper[4827]: E0126 09:32:42.703130 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.266067 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.320214 4827 generic.go:334] "Generic (PLEG): container finished" podID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerID="84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6" exitCode=0
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.320289 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerDied","Data":"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6"}
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.325588 4827 generic.go:334] "Generic (PLEG): container finished" podID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerID="aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0" exitCode=0
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.325612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerDied","Data":"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0"}
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.325630 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mrlg" event={"ID":"07b338b6-82dc-44ee-b2be-6486dabfa179","Type":"ContainerDied","Data":"f444097be15df254a0638e3a2670f63dfef2889f9573bc2fd18b5fa2eb342de5"}
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.325663 4827 scope.go:117] "RemoveContainer" containerID="aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.325807 4827 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mrlg"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.352944 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpbz\" (UniqueName: \"kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz\") pod \"07b338b6-82dc-44ee-b2be-6486dabfa179\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") "
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.353567 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities\") pod \"07b338b6-82dc-44ee-b2be-6486dabfa179\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") "
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.353609 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content\") pod \"07b338b6-82dc-44ee-b2be-6486dabfa179\" (UID: \"07b338b6-82dc-44ee-b2be-6486dabfa179\") "
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.355257 4827 scope.go:117] "RemoveContainer" containerID="c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.357398 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities" (OuterVolumeSpecName: "utilities") pod "07b338b6-82dc-44ee-b2be-6486dabfa179" (UID: "07b338b6-82dc-44ee-b2be-6486dabfa179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.378955 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz" (OuterVolumeSpecName: "kube-api-access-7hpbz") pod "07b338b6-82dc-44ee-b2be-6486dabfa179" (UID: "07b338b6-82dc-44ee-b2be-6486dabfa179"). InnerVolumeSpecName "kube-api-access-7hpbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.381208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b338b6-82dc-44ee-b2be-6486dabfa179" (UID: "07b338b6-82dc-44ee-b2be-6486dabfa179"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.420408 4827 scope.go:117] "RemoveContainer" containerID="883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f"
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.457118 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.457155 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b338b6-82dc-44ee-b2be-6486dabfa179-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.457213 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpbz\" (UniqueName: \"kubernetes.io/projected/07b338b6-82dc-44ee-b2be-6486dabfa179-kube-api-access-7hpbz\") on node \"crc\" DevicePath \"\""
Jan 26 09:32:43 crc kubenswrapper[4827]: I0126
09:32:43.466184 4827 scope.go:117] "RemoveContainer" containerID="aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0" Jan 26 09:32:43 crc kubenswrapper[4827]: E0126 09:32:43.466611 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0\": container with ID starting with aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0 not found: ID does not exist" containerID="aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.466679 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0"} err="failed to get container status \"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0\": rpc error: code = NotFound desc = could not find container \"aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0\": container with ID starting with aafe0e97befe772c67e5b1650d46e8b6456bf24b63fe65d57e2ea0fa0e3a90d0 not found: ID does not exist" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.466701 4827 scope.go:117] "RemoveContainer" containerID="c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c" Jan 26 09:32:43 crc kubenswrapper[4827]: E0126 09:32:43.467223 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c\": container with ID starting with c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c not found: ID does not exist" containerID="c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.467248 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c"} err="failed to get container status \"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c\": rpc error: code = NotFound desc = could not find container \"c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c\": container with ID starting with c1a130f3648ef51e830fb56b58dfd283ab84a821f922dcb09ca07ce58415c73c not found: ID does not exist" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.467265 4827 scope.go:117] "RemoveContainer" containerID="883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f" Jan 26 09:32:43 crc kubenswrapper[4827]: E0126 09:32:43.467515 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f\": container with ID starting with 883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f not found: ID does not exist" containerID="883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.467533 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f"} err="failed to get container status \"883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f\": rpc error: code = NotFound desc = could not find container \"883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f\": container with ID starting with 883a5b69fd63f81c3e70cc15a2b056705983f82ade460599e4b1093c142b5c4f not found: ID does not exist" Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.661058 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mrlg"] Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.675428 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4mrlg"] Jan 26 09:32:43 crc kubenswrapper[4827]: I0126 09:32:43.721354 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" path="/var/lib/kubelet/pods/07b338b6-82dc-44ee-b2be-6486dabfa179/volumes" Jan 26 09:32:44 crc kubenswrapper[4827]: I0126 09:32:44.335137 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerStarted","Data":"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2"} Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.070507 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.071162 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.127263 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.157970 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qpsjj" podStartSLOduration=7.668640074 podStartE2EDuration="11.157955617s" podCreationTimestamp="2026-01-26 09:32:38 +0000 UTC" firstStartedPulling="2026-01-26 09:32:40.253783957 +0000 UTC m=+1588.902455776" lastFinishedPulling="2026-01-26 09:32:43.74309949 +0000 UTC m=+1592.391771319" observedRunningTime="2026-01-26 09:32:44.364486295 +0000 UTC m=+1593.013158104" watchObservedRunningTime="2026-01-26 09:32:49.157955617 +0000 UTC m=+1597.806627436" Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.420928 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:49 crc kubenswrapper[4827]: I0126 09:32:49.471251 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpsjj"] Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.388867 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qpsjj" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="registry-server" containerID="cri-o://f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2" gracePeriod=2 Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.840881 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.968880 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h\") pod \"13c44f79-067f-4759-89cc-3050a8dd7e15\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.969014 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities\") pod \"13c44f79-067f-4759-89cc-3050a8dd7e15\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.969051 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content\") pod \"13c44f79-067f-4759-89cc-3050a8dd7e15\" (UID: \"13c44f79-067f-4759-89cc-3050a8dd7e15\") " Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.970121 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities" (OuterVolumeSpecName: "utilities") pod "13c44f79-067f-4759-89cc-3050a8dd7e15" (UID: "13c44f79-067f-4759-89cc-3050a8dd7e15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:32:51 crc kubenswrapper[4827]: I0126 09:32:51.987883 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h" (OuterVolumeSpecName: "kube-api-access-jbc6h") pod "13c44f79-067f-4759-89cc-3050a8dd7e15" (UID: "13c44f79-067f-4759-89cc-3050a8dd7e15"). InnerVolumeSpecName "kube-api-access-jbc6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.033676 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13c44f79-067f-4759-89cc-3050a8dd7e15" (UID: "13c44f79-067f-4759-89cc-3050a8dd7e15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.071085 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.071116 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c44f79-067f-4759-89cc-3050a8dd7e15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.071127 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/13c44f79-067f-4759-89cc-3050a8dd7e15-kube-api-access-jbc6h\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.399724 4827 generic.go:334] "Generic (PLEG): container finished" podID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerID="f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2" exitCode=0 Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.399773 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerDied","Data":"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2"} Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.399798 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qpsjj" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.399818 4827 scope.go:117] "RemoveContainer" containerID="f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.399806 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpsjj" event={"ID":"13c44f79-067f-4759-89cc-3050a8dd7e15","Type":"ContainerDied","Data":"c04a492f8c354c4f65aa6b6869e61e8e139c6e4c840b20d6709f4ad9786d73a5"} Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.423015 4827 scope.go:117] "RemoveContainer" containerID="84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.466562 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpsjj"] Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.473629 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qpsjj"] Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.480373 4827 scope.go:117] "RemoveContainer" containerID="5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.507618 4827 scope.go:117] "RemoveContainer" containerID="f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2" Jan 26 09:32:52 crc kubenswrapper[4827]: E0126 09:32:52.508108 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2\": container with ID starting with f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2 not found: ID does not exist" containerID="f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.508145 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2"} err="failed to get container status \"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2\": rpc error: code = NotFound desc = could not find container \"f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2\": container with ID starting with f2cd00034838b95fdb385d093353305495480ca7906904884a5ef4d9a12173c2 not found: ID does not exist" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.508169 4827 scope.go:117] "RemoveContainer" containerID="84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6" Jan 26 09:32:52 crc kubenswrapper[4827]: E0126 09:32:52.508544 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6\": container with ID starting with 84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6 not found: ID does not exist" containerID="84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.508588 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6"} err="failed to get container status \"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6\": rpc error: code = NotFound desc = could not find container \"84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6\": container with ID starting with 84ce1e9a4a0ef14435ae7b18814b18decd9fa12ee7dc14d54bda58982e2db1c6 not found: ID does not exist" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.508623 4827 scope.go:117] "RemoveContainer" containerID="5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86" Jan 26 09:32:52 crc kubenswrapper[4827]: E0126 
09:32:52.508895 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86\": container with ID starting with 5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86 not found: ID does not exist" containerID="5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86" Jan 26 09:32:52 crc kubenswrapper[4827]: I0126 09:32:52.508955 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86"} err="failed to get container status \"5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86\": rpc error: code = NotFound desc = could not find container \"5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86\": container with ID starting with 5d2f85564edac3927796dc2d8ce831815462c9fdcf4803cc9de2b026bd917b86 not found: ID does not exist" Jan 26 09:32:53 crc kubenswrapper[4827]: I0126 09:32:53.715178 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" path="/var/lib/kubelet/pods/13c44f79-067f-4759-89cc-3050a8dd7e15/volumes" Jan 26 09:32:56 crc kubenswrapper[4827]: I0126 09:32:56.703121 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:32:56 crc kubenswrapper[4827]: E0126 09:32:56.703598 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:33:07 crc kubenswrapper[4827]: I0126 09:33:07.703090 
4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:33:07 crc kubenswrapper[4827]: E0126 09:33:07.703948 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:33:11 crc kubenswrapper[4827]: I0126 09:33:11.582142 4827 generic.go:334] "Generic (PLEG): container finished" podID="c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" containerID="12aab131f1dc47605dbbc80b0ed0f97ba005b574ec162eed75a9bb3250d950f1" exitCode=0 Jan 26 09:33:11 crc kubenswrapper[4827]: I0126 09:33:11.582235 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" event={"ID":"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8","Type":"ContainerDied","Data":"12aab131f1dc47605dbbc80b0ed0f97ba005b574ec162eed75a9bb3250d950f1"} Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.005571 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.147529 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory\") pod \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.147717 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj6x9\" (UniqueName: \"kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9\") pod \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.147743 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle\") pod \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.147847 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam\") pod \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\" (UID: \"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8\") " Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.161998 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9" (OuterVolumeSpecName: "kube-api-access-tj6x9") pod "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" (UID: "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8"). InnerVolumeSpecName "kube-api-access-tj6x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.168557 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" (UID: "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.171903 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory" (OuterVolumeSpecName: "inventory") pod "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" (UID: "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.175528 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" (UID: "c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.249964 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.250005 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj6x9\" (UniqueName: \"kubernetes.io/projected/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-kube-api-access-tj6x9\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.250018 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.250026 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.602009 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" event={"ID":"c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8","Type":"ContainerDied","Data":"571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f"} Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.602072 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571eab8631edbdd81628fd9d25077c20bab732939528cb3348077dcfeb7db09f" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.602112 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702119 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"] Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702530 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702551 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702563 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="extract-content" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702571 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="extract-content" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702587 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="extract-utilities" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702596 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="extract-utilities" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702608 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="extract-content" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702616 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="extract-content" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702657 
4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702667 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702680 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702688 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: E0126 09:33:13.702708 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="extract-utilities" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702715 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="extract-utilities" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702895 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702914 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b338b6-82dc-44ee-b2be-6486dabfa179" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.702934 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c44f79-067f-4759-89cc-3050a8dd7e15" containerName="registry-server" Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.703727 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.706434 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.706770 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.710042 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.711770 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.728710 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"]
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.758800 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.758917 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptztj\" (UniqueName: \"kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.759004 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.860288 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptztj\" (UniqueName: \"kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.860625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.860767 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.864045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.865699 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:13 crc kubenswrapper[4827]: I0126 09:33:13.876626 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptztj\" (UniqueName: \"kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:14 crc kubenswrapper[4827]: I0126 09:33:14.021679 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:33:14 crc kubenswrapper[4827]: I0126 09:33:14.683415 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"]
Jan 26 09:33:14 crc kubenswrapper[4827]: W0126 09:33:14.704925 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a032cd5_c61d_41a6_871b_998a68d29913.slice/crio-543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059 WatchSource:0}: Error finding container 543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059: Status 404 returned error can't find the container with id 543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059
Jan 26 09:33:15 crc kubenswrapper[4827]: I0126 09:33:15.527936 4827 scope.go:117] "RemoveContainer" containerID="e74b6d2c347eae02f2e5bc470b3d40897bb4c62e9a68f436fb7609ab7e87112e"
Jan 26 09:33:15 crc kubenswrapper[4827]: I0126 09:33:15.565605 4827 scope.go:117] "RemoveContainer" containerID="87bd9c50a25fcebb45a76249b643413fa576917391bb4420549f676614c42251"
Jan 26 09:33:15 crc kubenswrapper[4827]: I0126 09:33:15.649970 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq" event={"ID":"5a032cd5-c61d-41a6-871b-998a68d29913","Type":"ContainerStarted","Data":"7b92c3f9106e34b5644403a334d94c2033f9028a3766ebb00d5fd3dbaba2a815"}
Jan 26 09:33:15 crc kubenswrapper[4827]: I0126 09:33:15.650009 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq" event={"ID":"5a032cd5-c61d-41a6-871b-998a68d29913","Type":"ContainerStarted","Data":"543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059"}
Jan 26 09:33:15 crc kubenswrapper[4827]: I0126 09:33:15.670373 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq" podStartSLOduration=2.244866496 podStartE2EDuration="2.670357782s" podCreationTimestamp="2026-01-26 09:33:13 +0000 UTC" firstStartedPulling="2026-01-26 09:33:14.70745918 +0000 UTC m=+1623.356130999" lastFinishedPulling="2026-01-26 09:33:15.132950466 +0000 UTC m=+1623.781622285" observedRunningTime="2026-01-26 09:33:15.669327564 +0000 UTC m=+1624.317999393" watchObservedRunningTime="2026-01-26 09:33:15.670357782 +0000 UTC m=+1624.319029601"
Jan 26 09:33:18 crc kubenswrapper[4827]: I0126 09:33:18.702570 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:33:18 crc kubenswrapper[4827]: E0126 09:33:18.703414 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:33:30 crc kubenswrapper[4827]: I0126 09:33:30.702445 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:33:30 crc kubenswrapper[4827]: E0126 09:33:30.703150 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:33:45 crc kubenswrapper[4827]: I0126 09:33:45.702838 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:33:45 crc kubenswrapper[4827]: E0126 09:33:45.703734 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:33:58 crc kubenswrapper[4827]: I0126 09:33:58.702794 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:33:58 crc kubenswrapper[4827]: E0126 09:33:58.704555 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.052862 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ckrtf"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.064746 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lwvw2"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.077202 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5s5pt"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.087245 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-be69-account-create-update-vh7s6"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.095769 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-62d5-account-create-update-q6kbw"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.104395 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-430e-account-create-update-c2ksl"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.113307 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ckrtf"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.121438 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lwvw2"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.129216 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-430e-account-create-update-c2ksl"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.137116 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-62d5-account-create-update-q6kbw"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.145699 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5s5pt"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.153716 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-be69-account-create-update-vh7s6"]
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.713845 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4801aa52-328a-4300-bc01-6c9a5455395e" path="/var/lib/kubelet/pods/4801aa52-328a-4300-bc01-6c9a5455395e/volumes"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.714435 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e" path="/var/lib/kubelet/pods/83ff3f7a-0ee0-4c27-92ea-8dcfcd44b99e/volumes"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.714963 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b379f41-b9fc-49f0-a30e-fb5611d5f043" path="/var/lib/kubelet/pods/8b379f41-b9fc-49f0-a30e-fb5611d5f043/volumes"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.715466 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdad4441-f327-4a21-8c0a-7f86ac1df4b4" path="/var/lib/kubelet/pods/cdad4441-f327-4a21-8c0a-7f86ac1df4b4/volumes"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.716422 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce90226f-115c-42f6-bf51-1451a31d647c" path="/var/lib/kubelet/pods/ce90226f-115c-42f6-bf51-1451a31d647c/volumes"
Jan 26 09:34:05 crc kubenswrapper[4827]: I0126 09:34:05.717008 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c91b0d-9e51-4065-9eee-fde2e4971bf4" path="/var/lib/kubelet/pods/e5c91b0d-9e51-4065-9eee-fde2e4971bf4/volumes"
Jan 26 09:34:13 crc kubenswrapper[4827]: I0126 09:34:13.703022 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:34:13 crc kubenswrapper[4827]: E0126 09:34:13.703783 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.664020 4827 scope.go:117] "RemoveContainer" containerID="c0db02b1f78d171d8cf05abaf9b3f123b22aefc2d355f80354be8960c48298be"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.712737 4827 scope.go:117] "RemoveContainer" containerID="877cdcd7c56b635a01a233b79b110bb95509a7f12ad63fc5139afd8a76dadba1"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.742532 4827 scope.go:117] "RemoveContainer" containerID="6cd014ea888aa55f66db481e5e74a96c2266b93ec93345ba81e0a5026b8298d1"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.793885 4827 scope.go:117] "RemoveContainer" containerID="ee3aac523df6de531a19ed702d0956ba0a5ef3dab55508edbef4630686f21937"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.837551 4827 scope.go:117] "RemoveContainer" containerID="7966158be522860f67ba34272cf848ac0cd9680c42b2051129c6b6e1d0311cde"
Jan 26 09:34:15 crc kubenswrapper[4827]: I0126 09:34:15.873052 4827 scope.go:117] "RemoveContainer" containerID="6f13e964074e4583f4cb88b0008a4a0282d2850f2182ac1fd10db6b9c3d9024f"
Jan 26 09:34:25 crc kubenswrapper[4827]: I0126 09:34:25.703496 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:34:25 crc kubenswrapper[4827]: E0126 09:34:25.704242 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.047728 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6b35-account-create-update-gsp2f"]
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.055471 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fkrg2"]
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.068086 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-57bw5"]
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.078102 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fkrg2"]
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.085625 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6b35-account-create-update-gsp2f"]
Jan 26 09:34:26 crc kubenswrapper[4827]: I0126 09:34:26.092827 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-57bw5"]
Jan 26 09:34:27 crc kubenswrapper[4827]: I0126 09:34:27.718552 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1191821d-a29f-4a52-ae1a-29659e28f5dc" path="/var/lib/kubelet/pods/1191821d-a29f-4a52-ae1a-29659e28f5dc/volumes"
Jan 26 09:34:27 crc kubenswrapper[4827]: I0126 09:34:27.719246 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134e2635-96c4-478c-85c3-1bf9b44ad38a" path="/var/lib/kubelet/pods/134e2635-96c4-478c-85c3-1bf9b44ad38a/volumes"
Jan 26 09:34:27 crc kubenswrapper[4827]: I0126 09:34:27.719919 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6985b23f-09b2-473e-bdbf-b0c115a93ca0" path="/var/lib/kubelet/pods/6985b23f-09b2-473e-bdbf-b0c115a93ca0/volumes"
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.051324 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f048-account-create-update-fnbq9"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.064718 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-v9zpk"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.084012 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0fe9-account-create-update-rx7j2"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.128648 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7qgkw"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.135249 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f048-account-create-update-fnbq9"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.143469 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0fe9-account-create-update-rx7j2"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.152968 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-v9zpk"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.159563 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7qgkw"]
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.713093 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a210d5f-6a79-48a8-90af-ccef43549ff7" path="/var/lib/kubelet/pods/2a210d5f-6a79-48a8-90af-ccef43549ff7/volumes"
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.713918 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5641f3e6-baea-4786-bb73-175101a77bc5" path="/var/lib/kubelet/pods/5641f3e6-baea-4786-bb73-175101a77bc5/volumes"
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.714422 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bb7463-e3b4-42cd-ba68-4a2361ec5a6a" path="/var/lib/kubelet/pods/68bb7463-e3b4-42cd-ba68-4a2361ec5a6a/volumes"
Jan 26 09:34:29 crc kubenswrapper[4827]: I0126 09:34:29.714950 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2" path="/var/lib/kubelet/pods/cf8ff7bc-05fb-4ddf-81bf-c1858dc6f4c2/volumes"
Jan 26 09:34:33 crc kubenswrapper[4827]: I0126 09:34:33.033818 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jhkcw"]
Jan 26 09:34:33 crc kubenswrapper[4827]: I0126 09:34:33.044387 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jhkcw"]
Jan 26 09:34:33 crc kubenswrapper[4827]: I0126 09:34:33.712630 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f" path="/var/lib/kubelet/pods/bb5b9a93-cbc4-4a7f-8bbc-ecfe7c94ff3f/volumes"
Jan 26 09:34:34 crc kubenswrapper[4827]: I0126 09:34:34.036052 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-lw6v4"]
Jan 26 09:34:34 crc kubenswrapper[4827]: I0126 09:34:34.045442 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-lw6v4"]
Jan 26 09:34:35 crc kubenswrapper[4827]: I0126 09:34:35.380506 4827 generic.go:334] "Generic (PLEG): container finished" podID="5a032cd5-c61d-41a6-871b-998a68d29913" containerID="7b92c3f9106e34b5644403a334d94c2033f9028a3766ebb00d5fd3dbaba2a815" exitCode=0
Jan 26 09:34:35 crc kubenswrapper[4827]: I0126 09:34:35.380556 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq" event={"ID":"5a032cd5-c61d-41a6-871b-998a68d29913","Type":"ContainerDied","Data":"7b92c3f9106e34b5644403a334d94c2033f9028a3766ebb00d5fd3dbaba2a815"}
Jan 26 09:34:35 crc kubenswrapper[4827]: I0126 09:34:35.718231 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203375e2-091f-4589-a8a6-e12f7af8a24d" path="/var/lib/kubelet/pods/203375e2-091f-4589-a8a6-e12f7af8a24d/volumes"
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.762225 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.878250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptztj\" (UniqueName: \"kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj\") pod \"5a032cd5-c61d-41a6-871b-998a68d29913\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") "
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.878563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory\") pod \"5a032cd5-c61d-41a6-871b-998a68d29913\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") "
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.878710 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam\") pod \"5a032cd5-c61d-41a6-871b-998a68d29913\" (UID: \"5a032cd5-c61d-41a6-871b-998a68d29913\") "
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.883237 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj" (OuterVolumeSpecName: "kube-api-access-ptztj") pod "5a032cd5-c61d-41a6-871b-998a68d29913" (UID: "5a032cd5-c61d-41a6-871b-998a68d29913"). InnerVolumeSpecName "kube-api-access-ptztj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.903330 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a032cd5-c61d-41a6-871b-998a68d29913" (UID: "5a032cd5-c61d-41a6-871b-998a68d29913"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.904005 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory" (OuterVolumeSpecName: "inventory") pod "5a032cd5-c61d-41a6-871b-998a68d29913" (UID: "5a032cd5-c61d-41a6-871b-998a68d29913"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.981493 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptztj\" (UniqueName: \"kubernetes.io/projected/5a032cd5-c61d-41a6-871b-998a68d29913-kube-api-access-ptztj\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.981542 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:36 crc kubenswrapper[4827]: I0126 09:34:36.981555 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a032cd5-c61d-41a6-871b-998a68d29913-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.396101 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq" event={"ID":"5a032cd5-c61d-41a6-871b-998a68d29913","Type":"ContainerDied","Data":"543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059"}
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.396153 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543310dfa4bbbc6d643f849377526e4f7f0ab646e79bd4cc34611dcd6c8d8059"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.396165 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.485956 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"]
Jan 26 09:34:37 crc kubenswrapper[4827]: E0126 09:34:37.486434 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a032cd5-c61d-41a6-871b-998a68d29913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.486458 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a032cd5-c61d-41a6-871b-998a68d29913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.486677 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a032cd5-c61d-41a6-871b-998a68d29913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.487436 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.492856 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.493056 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.493092 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.493848 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.506836 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"]
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.592118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.592399 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8j8n\" (UniqueName: \"kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.592582 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.694744 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.694869 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8j8n\" (UniqueName: \"kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.694955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.698505 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.699038 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.715897 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8j8n\" (UniqueName: \"kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:37 crc kubenswrapper[4827]: I0126 09:34:37.818519 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:38 crc kubenswrapper[4827]: I0126 09:34:38.331409 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"]
Jan 26 09:34:38 crc kubenswrapper[4827]: I0126 09:34:38.334864 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 09:34:38 crc kubenswrapper[4827]: I0126 09:34:38.406211 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd" event={"ID":"d2263311-624a-49ff-870a-14334cffbc56","Type":"ContainerStarted","Data":"a8a6c41ba46cadd6dba83770aa808d718cbc8d3328be4b1a088cc7b758909e15"}
Jan 26 09:34:39 crc kubenswrapper[4827]: I0126 09:34:39.416988 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd" event={"ID":"d2263311-624a-49ff-870a-14334cffbc56","Type":"ContainerStarted","Data":"08f1c46c3cf919c4718ec29260da95ea7347da0096f77291766f5527921e3f89"}
Jan 26 09:34:39 crc kubenswrapper[4827]: I0126 09:34:39.439816 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd" podStartSLOduration=1.972763789 podStartE2EDuration="2.439794717s" podCreationTimestamp="2026-01-26 09:34:37 +0000 UTC" firstStartedPulling="2026-01-26 09:34:38.33462481 +0000 UTC m=+1706.983296629" lastFinishedPulling="2026-01-26 09:34:38.801655738 +0000 UTC m=+1707.450327557" observedRunningTime="2026-01-26 09:34:39.432758913 +0000 UTC m=+1708.081430732" watchObservedRunningTime="2026-01-26 09:34:39.439794717 +0000 UTC m=+1708.088466536"
Jan 26 09:34:39 crc kubenswrapper[4827]: I0126 09:34:39.703407 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702"
Jan 26 09:34:39 crc kubenswrapper[4827]: E0126 09:34:39.703868 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:34:45 crc kubenswrapper[4827]: I0126 09:34:45.491932 4827 generic.go:334] "Generic (PLEG): container finished" podID="d2263311-624a-49ff-870a-14334cffbc56" containerID="08f1c46c3cf919c4718ec29260da95ea7347da0096f77291766f5527921e3f89" exitCode=0
Jan 26 09:34:45 crc kubenswrapper[4827]: I0126 09:34:45.492036 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd" event={"ID":"d2263311-624a-49ff-870a-14334cffbc56","Type":"ContainerDied","Data":"08f1c46c3cf919c4718ec29260da95ea7347da0096f77291766f5527921e3f89"}
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.885064 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.959044 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory\") pod \"d2263311-624a-49ff-870a-14334cffbc56\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") "
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.959142 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam\") pod \"d2263311-624a-49ff-870a-14334cffbc56\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") "
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.959165 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8j8n\" (UniqueName: \"kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n\") pod \"d2263311-624a-49ff-870a-14334cffbc56\" (UID: \"d2263311-624a-49ff-870a-14334cffbc56\") "
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.969544 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n" (OuterVolumeSpecName: "kube-api-access-m8j8n") pod "d2263311-624a-49ff-870a-14334cffbc56" (UID: "d2263311-624a-49ff-870a-14334cffbc56"). InnerVolumeSpecName "kube-api-access-m8j8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.990299 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2263311-624a-49ff-870a-14334cffbc56" (UID: "d2263311-624a-49ff-870a-14334cffbc56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:34:46 crc kubenswrapper[4827]: I0126 09:34:46.992520 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory" (OuterVolumeSpecName: "inventory") pod "d2263311-624a-49ff-870a-14334cffbc56" (UID: "d2263311-624a-49ff-870a-14334cffbc56"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.061141 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.061366 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2263311-624a-49ff-870a-14334cffbc56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.061482 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8j8n\" (UniqueName: \"kubernetes.io/projected/d2263311-624a-49ff-870a-14334cffbc56-kube-api-access-m8j8n\") on node \"crc\" DevicePath \"\""
Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.514570 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"
event={"ID":"d2263311-624a-49ff-870a-14334cffbc56","Type":"ContainerDied","Data":"a8a6c41ba46cadd6dba83770aa808d718cbc8d3328be4b1a088cc7b758909e15"} Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.514919 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a6c41ba46cadd6dba83770aa808d718cbc8d3328be4b1a088cc7b758909e15" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.514677 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.584726 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j"] Jan 26 09:34:47 crc kubenswrapper[4827]: E0126 09:34:47.585105 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2263311-624a-49ff-870a-14334cffbc56" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.585126 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2263311-624a-49ff-870a-14334cffbc56" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.585304 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2263311-624a-49ff-870a-14334cffbc56" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.585931 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.589428 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.597298 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.597320 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.597554 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.598387 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j"] Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.672519 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.672600 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.672920 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6wl\" (UniqueName: \"kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.774767 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6wl\" (UniqueName: \"kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.775029 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.775135 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.780410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.790862 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.791463 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6wl\" (UniqueName: \"kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tdk6j\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:47 crc kubenswrapper[4827]: I0126 09:34:47.909700 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:34:48 crc kubenswrapper[4827]: I0126 09:34:48.462167 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j"] Jan 26 09:34:48 crc kubenswrapper[4827]: I0126 09:34:48.523841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" event={"ID":"f543c02c-f09c-49aa-950c-d74789684e3a","Type":"ContainerStarted","Data":"344b2de16afe01614f7f647d0af6056d573b741538c4a31ffe2ebf7382ea6031"} Jan 26 09:34:49 crc kubenswrapper[4827]: I0126 09:34:49.533050 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" event={"ID":"f543c02c-f09c-49aa-950c-d74789684e3a","Type":"ContainerStarted","Data":"62c5c8137a8e3c06974f5d220803dac864259f04135c8c8a3da672eb6c27710a"} Jan 26 09:34:49 crc kubenswrapper[4827]: I0126 09:34:49.559029 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" podStartSLOduration=1.97085773 podStartE2EDuration="2.55900789s" podCreationTimestamp="2026-01-26 09:34:47 +0000 UTC" firstStartedPulling="2026-01-26 09:34:48.456679492 +0000 UTC m=+1717.105351321" lastFinishedPulling="2026-01-26 09:34:49.044829662 +0000 UTC m=+1717.693501481" observedRunningTime="2026-01-26 09:34:49.554469325 +0000 UTC m=+1718.203141144" watchObservedRunningTime="2026-01-26 09:34:49.55900789 +0000 UTC m=+1718.207679709" Jan 26 09:34:50 crc kubenswrapper[4827]: I0126 09:34:50.703101 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:34:50 crc kubenswrapper[4827]: E0126 09:34:50.703711 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:35:02 crc kubenswrapper[4827]: I0126 09:35:02.702502 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:35:02 crc kubenswrapper[4827]: E0126 09:35:02.704467 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:35:04 crc kubenswrapper[4827]: I0126 09:35:04.043340 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tjnkp"] Jan 26 09:35:04 crc kubenswrapper[4827]: I0126 09:35:04.053429 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tjnkp"] Jan 26 09:35:05 crc kubenswrapper[4827]: I0126 09:35:05.716029 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9f773f-e44d-4773-824f-dde5313c3c26" path="/var/lib/kubelet/pods/cb9f773f-e44d-4773-824f-dde5313c3c26/volumes" Jan 26 09:35:14 crc kubenswrapper[4827]: I0126 09:35:14.039108 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hm5j4"] Jan 26 09:35:14 crc kubenswrapper[4827]: I0126 09:35:14.050806 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hm5j4"] Jan 26 09:35:14 crc kubenswrapper[4827]: I0126 09:35:14.703254 4827 scope.go:117] "RemoveContainer" 
containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:35:14 crc kubenswrapper[4827]: E0126 09:35:14.704038 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:35:15 crc kubenswrapper[4827]: I0126 09:35:15.714786 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411be221-7c35-404e-9f79-7d5498fec92c" path="/var/lib/kubelet/pods/411be221-7c35-404e-9f79-7d5498fec92c/volumes" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.031384 4827 scope.go:117] "RemoveContainer" containerID="49f9062a2d93d9d163cd534025748ed6f8f02831b5f9332a96661688b0d9fa03" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.089384 4827 scope.go:117] "RemoveContainer" containerID="c29bf2bb2c49ad2b9a484db9c7fd7d8a730c515820a8833fb8486c474fd9cd0b" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.135441 4827 scope.go:117] "RemoveContainer" containerID="6a681098e011dcbdc20433aa2fd95d73e73ccb51f87c0ce0f7bdbc8723729c33" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.170506 4827 scope.go:117] "RemoveContainer" containerID="ecdd15148e8426f5eb56adfd2f83fa8d4461abe94fb559cd7ead41ebac5fc4d9" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.234575 4827 scope.go:117] "RemoveContainer" containerID="061d7e07c7f95da73bfbe962cfae08fba1a231fc1b653b6e25e09b468d261e8a" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.264570 4827 scope.go:117] "RemoveContainer" containerID="a6cd996dc778f0218de0a8800e717d3d095a508c2c51b6d3031f8dd5c2deeab5" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.311908 4827 scope.go:117] "RemoveContainer" 
containerID="6d422c6a9f1687d2d183b55253fad896196a9d2dd6819444da6c0d8308fdfc67" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.332103 4827 scope.go:117] "RemoveContainer" containerID="29ac3a728c102200cedf2ca549bae9dfbe01457148d20019cec2b82b5cc05f34" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.348632 4827 scope.go:117] "RemoveContainer" containerID="81d40b592e87126d4d2d8070bf2799e87fe6dd8a05fae31fb4bf3e854fd9d3d7" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.365342 4827 scope.go:117] "RemoveContainer" containerID="ba99c090fce3cd5b0b65e153f5e47e3bb52ba7ea93b81575ea88018eca5c766a" Jan 26 09:35:16 crc kubenswrapper[4827]: I0126 09:35:16.382382 4827 scope.go:117] "RemoveContainer" containerID="f2ee19780d1808d08dd895b212b88d071a4852853161b46b8639a718cd1346ac" Jan 26 09:35:26 crc kubenswrapper[4827]: I0126 09:35:26.032202 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s6fvd"] Jan 26 09:35:26 crc kubenswrapper[4827]: I0126 09:35:26.046240 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s6fvd"] Jan 26 09:35:26 crc kubenswrapper[4827]: I0126 09:35:26.703303 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:35:26 crc kubenswrapper[4827]: E0126 09:35:26.703956 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.032696 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bpxp5"] Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 
09:35:27.040204 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lwd9n"] Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.050521 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lwd9n"] Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.058272 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bpxp5"] Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.714032 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed11b79-49ca-4b9a-9ebc-413bb8032271" path="/var/lib/kubelet/pods/6ed11b79-49ca-4b9a-9ebc-413bb8032271/volumes" Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.714776 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66" path="/var/lib/kubelet/pods/8a87d7c6-23a9-40dd-a0f9-3d29a9ecce66/volumes" Jan 26 09:35:27 crc kubenswrapper[4827]: I0126 09:35:27.715331 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0129b71-c166-4c4d-b8e9-c7f1f1acdd36" path="/var/lib/kubelet/pods/a0129b71-c166-4c4d-b8e9-c7f1f1acdd36/volumes" Jan 26 09:35:30 crc kubenswrapper[4827]: I0126 09:35:30.933088 4827 generic.go:334] "Generic (PLEG): container finished" podID="f543c02c-f09c-49aa-950c-d74789684e3a" containerID="62c5c8137a8e3c06974f5d220803dac864259f04135c8c8a3da672eb6c27710a" exitCode=0 Jan 26 09:35:30 crc kubenswrapper[4827]: I0126 09:35:30.933319 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" event={"ID":"f543c02c-f09c-49aa-950c-d74789684e3a","Type":"ContainerDied","Data":"62c5c8137a8e3c06974f5d220803dac864259f04135c8c8a3da672eb6c27710a"} Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.366300 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.390475 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam\") pod \"f543c02c-f09c-49aa-950c-d74789684e3a\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.390959 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory\") pod \"f543c02c-f09c-49aa-950c-d74789684e3a\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.391101 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb6wl\" (UniqueName: \"kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl\") pod \"f543c02c-f09c-49aa-950c-d74789684e3a\" (UID: \"f543c02c-f09c-49aa-950c-d74789684e3a\") " Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.397018 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl" (OuterVolumeSpecName: "kube-api-access-pb6wl") pod "f543c02c-f09c-49aa-950c-d74789684e3a" (UID: "f543c02c-f09c-49aa-950c-d74789684e3a"). InnerVolumeSpecName "kube-api-access-pb6wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.423698 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f543c02c-f09c-49aa-950c-d74789684e3a" (UID: "f543c02c-f09c-49aa-950c-d74789684e3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.443036 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory" (OuterVolumeSpecName: "inventory") pod "f543c02c-f09c-49aa-950c-d74789684e3a" (UID: "f543c02c-f09c-49aa-950c-d74789684e3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.492530 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.492561 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb6wl\" (UniqueName: \"kubernetes.io/projected/f543c02c-f09c-49aa-950c-d74789684e3a-kube-api-access-pb6wl\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.492571 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f543c02c-f09c-49aa-950c-d74789684e3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.953387 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" 
event={"ID":"f543c02c-f09c-49aa-950c-d74789684e3a","Type":"ContainerDied","Data":"344b2de16afe01614f7f647d0af6056d573b741538c4a31ffe2ebf7382ea6031"} Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.953435 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344b2de16afe01614f7f647d0af6056d573b741538c4a31ffe2ebf7382ea6031" Jan 26 09:35:32 crc kubenswrapper[4827]: I0126 09:35:32.953480 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.046834 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r"] Jan 26 09:35:33 crc kubenswrapper[4827]: E0126 09:35:33.047285 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f543c02c-f09c-49aa-950c-d74789684e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.047309 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f543c02c-f09c-49aa-950c-d74789684e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.048272 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f543c02c-f09c-49aa-950c-d74789684e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.049681 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.057276 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.057356 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.057421 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.057515 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.062356 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r"] Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.103508 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.104138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qd7\" (UniqueName: \"kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc 
kubenswrapper[4827]: I0126 09:35:33.104403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.205678 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.206038 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6qd7\" (UniqueName: \"kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.206221 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.211253 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.211691 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.225087 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6qd7\" (UniqueName: \"kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.369271 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:33 crc kubenswrapper[4827]: I0126 09:35:33.986799 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r"] Jan 26 09:35:34 crc kubenswrapper[4827]: I0126 09:35:34.972564 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" event={"ID":"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87","Type":"ContainerStarted","Data":"b7c1fd544a6a55d447a4c64db71b16a4f9ab10f8a61e86d66618d6f94591ee89"} Jan 26 09:35:34 crc kubenswrapper[4827]: I0126 09:35:34.972889 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" event={"ID":"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87","Type":"ContainerStarted","Data":"291e2d2d8a64af0d84b9dc26ac6b9ea6c7102169fc073ed63d0d4e3c0f01cc98"} Jan 26 09:35:34 crc kubenswrapper[4827]: I0126 09:35:34.992432 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" podStartSLOduration=1.48961196 podStartE2EDuration="1.992416684s" podCreationTimestamp="2026-01-26 09:35:33 +0000 UTC" firstStartedPulling="2026-01-26 09:35:33.995705211 +0000 UTC m=+1762.644377030" lastFinishedPulling="2026-01-26 09:35:34.498509935 +0000 UTC m=+1763.147181754" observedRunningTime="2026-01-26 09:35:34.990709617 +0000 UTC m=+1763.639381436" watchObservedRunningTime="2026-01-26 09:35:34.992416684 +0000 UTC m=+1763.641088503" Jan 26 09:35:39 crc kubenswrapper[4827]: I0126 09:35:39.003488 4827 generic.go:334] "Generic (PLEG): container finished" podID="9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" containerID="b7c1fd544a6a55d447a4c64db71b16a4f9ab10f8a61e86d66618d6f94591ee89" exitCode=0 Jan 26 09:35:39 crc kubenswrapper[4827]: I0126 09:35:39.003591 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" event={"ID":"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87","Type":"ContainerDied","Data":"b7c1fd544a6a55d447a4c64db71b16a4f9ab10f8a61e86d66618d6f94591ee89"} Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.455166 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.544971 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory\") pod \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.545889 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam\") pod \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.545985 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6qd7\" (UniqueName: \"kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7\") pod \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\" (UID: \"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87\") " Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.554028 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7" (OuterVolumeSpecName: "kube-api-access-g6qd7") pod "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" (UID: "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87"). InnerVolumeSpecName "kube-api-access-g6qd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.577589 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" (UID: "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.584105 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory" (OuterVolumeSpecName: "inventory") pod "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" (UID: "9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.648024 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.648076 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:40 crc kubenswrapper[4827]: I0126 09:35:40.648089 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6qd7\" (UniqueName: \"kubernetes.io/projected/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87-kube-api-access-g6qd7\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.027753 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" 
event={"ID":"9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87","Type":"ContainerDied","Data":"291e2d2d8a64af0d84b9dc26ac6b9ea6c7102169fc073ed63d0d4e3c0f01cc98"} Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.027806 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291e2d2d8a64af0d84b9dc26ac6b9ea6c7102169fc073ed63d0d4e3c0f01cc98" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.027868 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.110234 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6"] Jan 26 09:35:41 crc kubenswrapper[4827]: E0126 09:35:41.110611 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.110655 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.110837 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.111459 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.117598 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.117916 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.117923 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.118003 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.122709 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6"] Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.157820 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jw2\" (UniqueName: \"kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.158200 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.158446 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.260766 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jw2\" (UniqueName: \"kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.260832 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.260890 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.265524 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.271757 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.282384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jw2\" (UniqueName: \"kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:41 crc kubenswrapper[4827]: I0126 09:35:41.435731 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:35:44 crc kubenswrapper[4827]: I0126 09:35:41.713685 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:35:44 crc kubenswrapper[4827]: E0126 09:35:41.714402 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:35:44 crc kubenswrapper[4827]: W0126 09:35:43.214185 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d02e6b2_0975_44c1_a096_12a0491ace24.slice/crio-cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83 WatchSource:0}: Error finding container cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83: Status 404 returned error can't find the container with id cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83 Jan 26 09:35:44 crc kubenswrapper[4827]: I0126 09:35:43.214864 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6"] Jan 26 09:35:44 crc kubenswrapper[4827]: I0126 09:35:44.050657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" event={"ID":"4d02e6b2-0975-44c1-a096-12a0491ace24","Type":"ContainerStarted","Data":"8b536170d98ced33307a229bbc7eba259c715d1d397d3c8d02459cc71da033f2"} Jan 26 09:35:44 crc kubenswrapper[4827]: I0126 09:35:44.050896 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" event={"ID":"4d02e6b2-0975-44c1-a096-12a0491ace24","Type":"ContainerStarted","Data":"cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83"} Jan 26 09:35:44 crc kubenswrapper[4827]: I0126 09:35:44.070404 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" podStartSLOduration=2.487168141 podStartE2EDuration="3.070383734s" podCreationTimestamp="2026-01-26 09:35:41 +0000 UTC" firstStartedPulling="2026-01-26 09:35:43.222162568 +0000 UTC m=+1771.870834397" lastFinishedPulling="2026-01-26 09:35:43.805378161 +0000 UTC m=+1772.454049990" observedRunningTime="2026-01-26 09:35:44.06696606 +0000 UTC m=+1772.715637879" watchObservedRunningTime="2026-01-26 09:35:44.070383734 +0000 UTC m=+1772.719055553" Jan 26 09:35:55 crc kubenswrapper[4827]: I0126 09:35:55.703432 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:35:55 crc kubenswrapper[4827]: E0126 09:35:55.705483 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:36:08 crc kubenswrapper[4827]: I0126 09:36:08.703550 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:36:08 crc kubenswrapper[4827]: E0126 09:36:08.704462 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:36:16 crc kubenswrapper[4827]: I0126 09:36:16.618463 4827 scope.go:117] "RemoveContainer" containerID="4eed3b95fbd98695b5c69b244c5fdc77296c222c3d1fc1cd0ad684fd7e0088d4" Jan 26 09:36:16 crc kubenswrapper[4827]: I0126 09:36:16.664387 4827 scope.go:117] "RemoveContainer" containerID="3d3e488a7d5c1fbe9209f942ad2577e1afbf38fe919014c568b91e0677f82a7b" Jan 26 09:36:16 crc kubenswrapper[4827]: I0126 09:36:16.709205 4827 scope.go:117] "RemoveContainer" containerID="3b11929874ddbef020edc6afdae1bde44c9b79b19b0af4893bcbe4797a5d5f90" Jan 26 09:36:19 crc kubenswrapper[4827]: I0126 09:36:19.702779 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:36:19 crc kubenswrapper[4827]: E0126 09:36:19.703624 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:36:20 crc kubenswrapper[4827]: I0126 09:36:20.041877 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fk7m4"] Jan 26 09:36:20 crc kubenswrapper[4827]: I0126 09:36:20.048914 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fk7m4"] Jan 26 09:36:20 crc kubenswrapper[4827]: I0126 09:36:20.059013 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vm454"] Jan 26 09:36:20 crc kubenswrapper[4827]: I0126 09:36:20.067181 4827 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vm454"] Jan 26 09:36:21 crc kubenswrapper[4827]: I0126 09:36:21.029444 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f56d-account-create-update-74r6s"] Jan 26 09:36:21 crc kubenswrapper[4827]: I0126 09:36:21.044144 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f56d-account-create-update-74r6s"] Jan 26 09:36:21 crc kubenswrapper[4827]: I0126 09:36:21.713783 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869917b6-85c2-45fe-8bb2-1bb1bebed474" path="/var/lib/kubelet/pods/869917b6-85c2-45fe-8bb2-1bb1bebed474/volumes" Jan 26 09:36:21 crc kubenswrapper[4827]: I0126 09:36:21.714390 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891a9e82-f3d1-42d5-81c3-9d397421322a" path="/var/lib/kubelet/pods/891a9e82-f3d1-42d5-81c3-9d397421322a/volumes" Jan 26 09:36:21 crc kubenswrapper[4827]: I0126 09:36:21.715043 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b6bbd8-6b37-497c-a3e4-e576808ad689" path="/var/lib/kubelet/pods/d0b6bbd8-6b37-497c-a3e4-e576808ad689/volumes" Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.040341 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b376-account-create-update-tqdsd"] Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.053121 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6r5mx"] Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.067733 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7bf0-account-create-update-qxgg2"] Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.076671 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b376-account-create-update-tqdsd"] Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.084399 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-7bf0-account-create-update-qxgg2"] Jan 26 09:36:22 crc kubenswrapper[4827]: I0126 09:36:22.092396 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6r5mx"] Jan 26 09:36:23 crc kubenswrapper[4827]: I0126 09:36:23.720846 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31de30bf-1b49-4768-8f78-7c3812d1f6f9" path="/var/lib/kubelet/pods/31de30bf-1b49-4768-8f78-7c3812d1f6f9/volumes" Jan 26 09:36:23 crc kubenswrapper[4827]: I0126 09:36:23.721710 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6510fbc8-65b5-4783-8992-66f4b0899cef" path="/var/lib/kubelet/pods/6510fbc8-65b5-4783-8992-66f4b0899cef/volumes" Jan 26 09:36:23 crc kubenswrapper[4827]: I0126 09:36:23.722348 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c811b170-ced1-481b-adf7-9167094df800" path="/var/lib/kubelet/pods/c811b170-ced1-481b-adf7-9167094df800/volumes" Jan 26 09:36:34 crc kubenswrapper[4827]: I0126 09:36:34.704324 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:36:34 crc kubenswrapper[4827]: E0126 09:36:34.705575 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:36:37 crc kubenswrapper[4827]: I0126 09:36:37.530721 4827 generic.go:334] "Generic (PLEG): container finished" podID="4d02e6b2-0975-44c1-a096-12a0491ace24" containerID="8b536170d98ced33307a229bbc7eba259c715d1d397d3c8d02459cc71da033f2" exitCode=0 Jan 26 09:36:37 crc kubenswrapper[4827]: I0126 09:36:37.530800 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" event={"ID":"4d02e6b2-0975-44c1-a096-12a0491ace24","Type":"ContainerDied","Data":"8b536170d98ced33307a229bbc7eba259c715d1d397d3c8d02459cc71da033f2"} Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.143929 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.329755 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory\") pod \"4d02e6b2-0975-44c1-a096-12a0491ace24\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.329955 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam\") pod \"4d02e6b2-0975-44c1-a096-12a0491ace24\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.329982 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jw2\" (UniqueName: \"kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2\") pod \"4d02e6b2-0975-44c1-a096-12a0491ace24\" (UID: \"4d02e6b2-0975-44c1-a096-12a0491ace24\") " Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.334623 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2" (OuterVolumeSpecName: "kube-api-access-v7jw2") pod "4d02e6b2-0975-44c1-a096-12a0491ace24" (UID: "4d02e6b2-0975-44c1-a096-12a0491ace24"). InnerVolumeSpecName "kube-api-access-v7jw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.360870 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d02e6b2-0975-44c1-a096-12a0491ace24" (UID: "4d02e6b2-0975-44c1-a096-12a0491ace24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.367158 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory" (OuterVolumeSpecName: "inventory") pod "4d02e6b2-0975-44c1-a096-12a0491ace24" (UID: "4d02e6b2-0975-44c1-a096-12a0491ace24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.431994 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.432027 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d02e6b2-0975-44c1-a096-12a0491ace24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.432045 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jw2\" (UniqueName: \"kubernetes.io/projected/4d02e6b2-0975-44c1-a096-12a0491ace24-kube-api-access-v7jw2\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.560993 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" 
event={"ID":"4d02e6b2-0975-44c1-a096-12a0491ace24","Type":"ContainerDied","Data":"cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83"} Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.561037 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad5b9476e5ff2f2edbea8d742b2d0eddb3b1ff685bb100f0a4a47619637fa83" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.561062 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.645801 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5l22x"] Jan 26 09:36:39 crc kubenswrapper[4827]: E0126 09:36:39.650220 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d02e6b2-0975-44c1-a096-12a0491ace24" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.650261 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d02e6b2-0975-44c1-a096-12a0491ace24" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.650499 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d02e6b2-0975-44c1-a096-12a0491ace24" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.651312 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.655580 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.655852 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.656323 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.656666 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.665446 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5l22x"] Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.839469 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.839602 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvh2g\" (UniqueName: \"kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.839763 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.941217 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.941316 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvh2g\" (UniqueName: \"kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.941379 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.945088 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc 
kubenswrapper[4827]: I0126 09:36:39.945438 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:39 crc kubenswrapper[4827]: I0126 09:36:39.968361 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvh2g\" (UniqueName: \"kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g\") pod \"ssh-known-hosts-edpm-deployment-5l22x\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:40 crc kubenswrapper[4827]: I0126 09:36:40.265918 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:40 crc kubenswrapper[4827]: I0126 09:36:40.859265 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5l22x"] Jan 26 09:36:41 crc kubenswrapper[4827]: I0126 09:36:41.576594 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" event={"ID":"d24bd1f9-4e6c-419d-b2b2-f3177fee4693","Type":"ContainerStarted","Data":"5afb6392fc4f347a653bc36373f1247f63b15a597e15daa0a985ded796d5acfe"} Jan 26 09:36:41 crc kubenswrapper[4827]: I0126 09:36:41.577750 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" event={"ID":"d24bd1f9-4e6c-419d-b2b2-f3177fee4693","Type":"ContainerStarted","Data":"dbfd3d0487d4deeec31f8986212c0f9b0dd826c6196ec3b3870d180e10645acd"} Jan 26 09:36:41 crc kubenswrapper[4827]: I0126 09:36:41.591680 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" podStartSLOduration=2.14977649 podStartE2EDuration="2.591659334s" podCreationTimestamp="2026-01-26 09:36:39 +0000 UTC" firstStartedPulling="2026-01-26 09:36:40.874587897 +0000 UTC m=+1829.523259716" lastFinishedPulling="2026-01-26 09:36:41.316470741 +0000 UTC m=+1829.965142560" observedRunningTime="2026-01-26 09:36:41.589211786 +0000 UTC m=+1830.237883605" watchObservedRunningTime="2026-01-26 09:36:41.591659334 +0000 UTC m=+1830.240331143" Jan 26 09:36:46 crc kubenswrapper[4827]: I0126 09:36:46.054461 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2qr8h"] Jan 26 09:36:46 crc kubenswrapper[4827]: I0126 09:36:46.070195 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2qr8h"] Jan 26 09:36:47 crc kubenswrapper[4827]: I0126 09:36:47.712967 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441be7a9-ebbe-420e-896e-f28eb3cdbe16" path="/var/lib/kubelet/pods/441be7a9-ebbe-420e-896e-f28eb3cdbe16/volumes" Jan 26 09:36:49 crc kubenswrapper[4827]: I0126 09:36:49.702906 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:36:49 crc kubenswrapper[4827]: E0126 09:36:49.704646 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:36:50 crc kubenswrapper[4827]: I0126 09:36:50.653952 4827 generic.go:334] "Generic (PLEG): container finished" podID="d24bd1f9-4e6c-419d-b2b2-f3177fee4693" 
containerID="5afb6392fc4f347a653bc36373f1247f63b15a597e15daa0a985ded796d5acfe" exitCode=0 Jan 26 09:36:50 crc kubenswrapper[4827]: I0126 09:36:50.654033 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" event={"ID":"d24bd1f9-4e6c-419d-b2b2-f3177fee4693","Type":"ContainerDied","Data":"5afb6392fc4f347a653bc36373f1247f63b15a597e15daa0a985ded796d5acfe"} Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.682870 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" event={"ID":"d24bd1f9-4e6c-419d-b2b2-f3177fee4693","Type":"ContainerDied","Data":"dbfd3d0487d4deeec31f8986212c0f9b0dd826c6196ec3b3870d180e10645acd"} Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.683328 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbfd3d0487d4deeec31f8986212c0f9b0dd826c6196ec3b3870d180e10645acd" Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.720557 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.919952 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvh2g\" (UniqueName: \"kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g\") pod \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.920092 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0\") pod \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.920240 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam\") pod \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\" (UID: \"d24bd1f9-4e6c-419d-b2b2-f3177fee4693\") " Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.939886 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g" (OuterVolumeSpecName: "kube-api-access-kvh2g") pod "d24bd1f9-4e6c-419d-b2b2-f3177fee4693" (UID: "d24bd1f9-4e6c-419d-b2b2-f3177fee4693"). InnerVolumeSpecName "kube-api-access-kvh2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.955529 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d24bd1f9-4e6c-419d-b2b2-f3177fee4693" (UID: "d24bd1f9-4e6c-419d-b2b2-f3177fee4693"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:53 crc kubenswrapper[4827]: I0126 09:36:53.955577 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d24bd1f9-4e6c-419d-b2b2-f3177fee4693" (UID: "d24bd1f9-4e6c-419d-b2b2-f3177fee4693"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.023146 4827 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.023427 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.023446 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvh2g\" (UniqueName: \"kubernetes.io/projected/d24bd1f9-4e6c-419d-b2b2-f3177fee4693-kube-api-access-kvh2g\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.691553 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5l22x" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.861895 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln"] Jan 26 09:36:54 crc kubenswrapper[4827]: E0126 09:36:54.862324 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24bd1f9-4e6c-419d-b2b2-f3177fee4693" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.862347 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24bd1f9-4e6c-419d-b2b2-f3177fee4693" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.862581 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24bd1f9-4e6c-419d-b2b2-f3177fee4693" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.863283 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.865910 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.866343 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.867812 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.877670 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:36:54 crc kubenswrapper[4827]: I0126 09:36:54.882023 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln"] Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.041398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.041525 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.041779 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwb9q\" (UniqueName: \"kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.143713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.144122 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.144346 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwb9q\" (UniqueName: \"kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.149453 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: 
\"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.154207 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.167630 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwb9q\" (UniqueName: \"kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pbqln\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.182881 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:36:55 crc kubenswrapper[4827]: I0126 09:36:55.739066 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln"] Jan 26 09:36:56 crc kubenswrapper[4827]: I0126 09:36:56.717476 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" event={"ID":"ee053ea1-a2b6-491a-8df3-caa4c6965566","Type":"ContainerStarted","Data":"64804b4d1ff1bdeefe7667d919a1dc5c0904b5ca8e2a4b6afef967da90b06539"} Jan 26 09:36:56 crc kubenswrapper[4827]: I0126 09:36:56.718150 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" event={"ID":"ee053ea1-a2b6-491a-8df3-caa4c6965566","Type":"ContainerStarted","Data":"4417a88f1f27c38d385d459a7763867d3db5a97274cf6db2684248828a1918d3"} Jan 26 09:36:56 crc kubenswrapper[4827]: I0126 09:36:56.758107 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" podStartSLOduration=2.386273041 podStartE2EDuration="2.75804361s" podCreationTimestamp="2026-01-26 09:36:54 +0000 UTC" firstStartedPulling="2026-01-26 09:36:55.746435666 +0000 UTC m=+1844.395107485" lastFinishedPulling="2026-01-26 09:36:56.118206235 +0000 UTC m=+1844.766878054" observedRunningTime="2026-01-26 09:36:56.742758828 +0000 UTC m=+1845.391430707" watchObservedRunningTime="2026-01-26 09:36:56.75804361 +0000 UTC m=+1845.406715459" Jan 26 09:37:04 crc kubenswrapper[4827]: I0126 09:37:04.704000 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:37:04 crc kubenswrapper[4827]: E0126 09:37:04.705128 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:37:04 crc kubenswrapper[4827]: I0126 09:37:04.786211 4827 generic.go:334] "Generic (PLEG): container finished" podID="ee053ea1-a2b6-491a-8df3-caa4c6965566" containerID="64804b4d1ff1bdeefe7667d919a1dc5c0904b5ca8e2a4b6afef967da90b06539" exitCode=0 Jan 26 09:37:04 crc kubenswrapper[4827]: I0126 09:37:04.786269 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" event={"ID":"ee053ea1-a2b6-491a-8df3-caa4c6965566","Type":"ContainerDied","Data":"64804b4d1ff1bdeefe7667d919a1dc5c0904b5ca8e2a4b6afef967da90b06539"} Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.128288 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.240225 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwb9q\" (UniqueName: \"kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q\") pod \"ee053ea1-a2b6-491a-8df3-caa4c6965566\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.240591 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam\") pod \"ee053ea1-a2b6-491a-8df3-caa4c6965566\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.240815 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory\") pod \"ee053ea1-a2b6-491a-8df3-caa4c6965566\" (UID: \"ee053ea1-a2b6-491a-8df3-caa4c6965566\") " Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.266911 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q" (OuterVolumeSpecName: "kube-api-access-bwb9q") pod "ee053ea1-a2b6-491a-8df3-caa4c6965566" (UID: "ee053ea1-a2b6-491a-8df3-caa4c6965566"). InnerVolumeSpecName "kube-api-access-bwb9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.269997 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory" (OuterVolumeSpecName: "inventory") pod "ee053ea1-a2b6-491a-8df3-caa4c6965566" (UID: "ee053ea1-a2b6-491a-8df3-caa4c6965566"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.272002 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee053ea1-a2b6-491a-8df3-caa4c6965566" (UID: "ee053ea1-a2b6-491a-8df3-caa4c6965566"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.342454 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.342484 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwb9q\" (UniqueName: \"kubernetes.io/projected/ee053ea1-a2b6-491a-8df3-caa4c6965566-kube-api-access-bwb9q\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.342494 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee053ea1-a2b6-491a-8df3-caa4c6965566-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.806073 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" event={"ID":"ee053ea1-a2b6-491a-8df3-caa4c6965566","Type":"ContainerDied","Data":"4417a88f1f27c38d385d459a7763867d3db5a97274cf6db2684248828a1918d3"} Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.806148 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4417a88f1f27c38d385d459a7763867d3db5a97274cf6db2684248828a1918d3" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.806254 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.897566 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q"] Jan 26 09:37:06 crc kubenswrapper[4827]: E0126 09:37:06.898250 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee053ea1-a2b6-491a-8df3-caa4c6965566" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.898280 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee053ea1-a2b6-491a-8df3-caa4c6965566" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.898669 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee053ea1-a2b6-491a-8df3-caa4c6965566" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.899870 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.903733 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.908245 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.908279 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.908954 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:37:06 crc kubenswrapper[4827]: I0126 09:37:06.919707 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q"] Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.055533 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.055788 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.055964 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkln\" (UniqueName: \"kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.157885 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.157961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.158003 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkln\" (UniqueName: \"kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.166804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.180908 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.182063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkln\" (UniqueName: \"kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.216705 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:07 crc kubenswrapper[4827]: W0126 09:37:07.747917 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod069c6d28_af14_44c4_8f9a_1215f8a9cd57.slice/crio-240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb WatchSource:0}: Error finding container 240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb: Status 404 returned error can't find the container with id 240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.748119 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q"] Jan 26 09:37:07 crc kubenswrapper[4827]: I0126 09:37:07.814211 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" event={"ID":"069c6d28-af14-44c4-8f9a-1215f8a9cd57","Type":"ContainerStarted","Data":"240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb"} Jan 26 09:37:08 crc kubenswrapper[4827]: I0126 09:37:08.828246 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" event={"ID":"069c6d28-af14-44c4-8f9a-1215f8a9cd57","Type":"ContainerStarted","Data":"88b92912b4e71af852fe0fa065c1a4d2a9592625ad248bc575fab754b579ec08"} Jan 26 09:37:08 crc kubenswrapper[4827]: I0126 09:37:08.849611 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" podStartSLOduration=2.40571126 podStartE2EDuration="2.849586158s" podCreationTimestamp="2026-01-26 09:37:06 +0000 UTC" firstStartedPulling="2026-01-26 09:37:07.749967585 +0000 UTC m=+1856.398639404" lastFinishedPulling="2026-01-26 09:37:08.193842483 +0000 UTC m=+1856.842514302" 
observedRunningTime="2026-01-26 09:37:08.845730492 +0000 UTC m=+1857.494402321" watchObservedRunningTime="2026-01-26 09:37:08.849586158 +0000 UTC m=+1857.498257987" Jan 26 09:37:09 crc kubenswrapper[4827]: I0126 09:37:09.036388 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6sqfn"] Jan 26 09:37:09 crc kubenswrapper[4827]: I0126 09:37:09.045871 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6sqfn"] Jan 26 09:37:09 crc kubenswrapper[4827]: I0126 09:37:09.714502 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3793491-d9a6-4c4a-ad5b-00818693d5fc" path="/var/lib/kubelet/pods/c3793491-d9a6-4c4a-ad5b-00818693d5fc/volumes" Jan 26 09:37:10 crc kubenswrapper[4827]: I0126 09:37:10.033671 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wj8g6"] Jan 26 09:37:10 crc kubenswrapper[4827]: I0126 09:37:10.044585 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wj8g6"] Jan 26 09:37:11 crc kubenswrapper[4827]: I0126 09:37:11.712972 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4762a9bd-9e83-4616-a70b-3c53f1d4147c" path="/var/lib/kubelet/pods/4762a9bd-9e83-4616-a70b-3c53f1d4147c/volumes" Jan 26 09:37:16 crc kubenswrapper[4827]: I0126 09:37:16.818952 4827 scope.go:117] "RemoveContainer" containerID="23775374a0bfefd753ecaa0d2cf7b6e9903e9066d7bf541cbd87abd9d881fd3d" Jan 26 09:37:16 crc kubenswrapper[4827]: I0126 09:37:16.857534 4827 scope.go:117] "RemoveContainer" containerID="e9d1f1368f2195858fcc84b4260fe68cec52093b8a0f62cec161da80ac3ef9f8" Jan 26 09:37:16 crc kubenswrapper[4827]: I0126 09:37:16.882500 4827 scope.go:117] "RemoveContainer" containerID="d48e0771f890e354685262eb2d234560519a8e19ed3f56050f0a070b6f3ef632" Jan 26 09:37:16 crc kubenswrapper[4827]: I0126 09:37:16.921065 4827 scope.go:117] "RemoveContainer" 
containerID="7eda53e307649f59fae9626e39b8f916a67a160ec3d0a2fa88f78a25d743db4e" Jan 26 09:37:16 crc kubenswrapper[4827]: I0126 09:37:16.958682 4827 scope.go:117] "RemoveContainer" containerID="58269286c65356587e5b8338fd4e53fd847b4318c08a05ad5f4d7500d717eb2e" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.002976 4827 scope.go:117] "RemoveContainer" containerID="21010fa8fb743688fcddf0dc21ba0f9179aff798e62e9dd41168e4396dfa9f16" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.056029 4827 scope.go:117] "RemoveContainer" containerID="409a77068fc0b1fafa033820c4ca45cc3d83b4e84b880e4b55b949b1a1a43f55" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.077043 4827 scope.go:117] "RemoveContainer" containerID="67571f445348db8200d8412b7b40a1dc5be9649cec2a4e038740b7e409335df8" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.110456 4827 scope.go:117] "RemoveContainer" containerID="6a40712e7a28a86e4dcfbd88a9e0480982fe67b0e37dc968f4300acaa0701ca3" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.702611 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.911932 4827 generic.go:334] "Generic (PLEG): container finished" podID="069c6d28-af14-44c4-8f9a-1215f8a9cd57" containerID="88b92912b4e71af852fe0fa065c1a4d2a9592625ad248bc575fab754b579ec08" exitCode=0 Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.912070 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" event={"ID":"069c6d28-af14-44c4-8f9a-1215f8a9cd57","Type":"ContainerDied","Data":"88b92912b4e71af852fe0fa065c1a4d2a9592625ad248bc575fab754b579ec08"} Jan 26 09:37:17 crc kubenswrapper[4827]: I0126 09:37:17.915296 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" 
event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8"} Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.312297 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.501274 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam\") pod \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.502084 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory\") pod \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.502130 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qkln\" (UniqueName: \"kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln\") pod \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\" (UID: \"069c6d28-af14-44c4-8f9a-1215f8a9cd57\") " Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.509571 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln" (OuterVolumeSpecName: "kube-api-access-2qkln") pod "069c6d28-af14-44c4-8f9a-1215f8a9cd57" (UID: "069c6d28-af14-44c4-8f9a-1215f8a9cd57"). InnerVolumeSpecName "kube-api-access-2qkln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.534857 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "069c6d28-af14-44c4-8f9a-1215f8a9cd57" (UID: "069c6d28-af14-44c4-8f9a-1215f8a9cd57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.539543 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory" (OuterVolumeSpecName: "inventory") pod "069c6d28-af14-44c4-8f9a-1215f8a9cd57" (UID: "069c6d28-af14-44c4-8f9a-1215f8a9cd57"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.604142 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.604180 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069c6d28-af14-44c4-8f9a-1215f8a9cd57-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.604193 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qkln\" (UniqueName: \"kubernetes.io/projected/069c6d28-af14-44c4-8f9a-1215f8a9cd57-kube-api-access-2qkln\") on node \"crc\" DevicePath \"\"" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.930802 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" 
event={"ID":"069c6d28-af14-44c4-8f9a-1215f8a9cd57","Type":"ContainerDied","Data":"240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb"} Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.931141 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240093ac7ca84d7ab9a1603866b3fc297da6ebe8a6d4068b4b5d41c576c2e4fb" Jan 26 09:37:19 crc kubenswrapper[4827]: I0126 09:37:19.930887 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q" Jan 26 09:37:55 crc kubenswrapper[4827]: I0126 09:37:55.038509 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kdm7c"] Jan 26 09:37:55 crc kubenswrapper[4827]: I0126 09:37:55.047699 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kdm7c"] Jan 26 09:37:55 crc kubenswrapper[4827]: I0126 09:37:55.713015 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f" path="/var/lib/kubelet/pods/cea72f1d-1aad-49c8-bcfe-dfb4ed1ee03f/volumes" Jan 26 09:38:17 crc kubenswrapper[4827]: I0126 09:38:17.286669 4827 scope.go:117] "RemoveContainer" containerID="a394349f3c58d2735008320353db15b7185ea433f1eb7665c30002fecc993db7" Jan 26 09:39:42 crc kubenswrapper[4827]: I0126 09:39:42.269339 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:39:42 crc kubenswrapper[4827]: I0126 09:39:42.270756 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.384260 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:39:56 crc kubenswrapper[4827]: E0126 09:39:56.385036 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069c6d28-af14-44c4-8f9a-1215f8a9cd57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.385048 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="069c6d28-af14-44c4-8f9a-1215f8a9cd57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.385194 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="069c6d28-af14-44c4-8f9a-1215f8a9cd57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.386437 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.403827 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.481903 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.482218 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzr8\" (UniqueName: \"kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.482395 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.584762 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.585041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jbzr8\" (UniqueName: \"kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.585177 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.586107 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.586121 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.610047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzr8\" (UniqueName: \"kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8\") pod \"redhat-operators-dpb89\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:56 crc kubenswrapper[4827]: I0126 09:39:56.705231 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:39:57 crc kubenswrapper[4827]: I0126 09:39:57.175417 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:39:57 crc kubenswrapper[4827]: I0126 09:39:57.247954 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerStarted","Data":"2313dcdff089d3361f2091e6ad8d23bf93ad4302b32e19334a7f0619dde50fb8"} Jan 26 09:39:58 crc kubenswrapper[4827]: I0126 09:39:58.260828 4827 generic.go:334] "Generic (PLEG): container finished" podID="04a983e5-29b7-4971-95f0-99489f643962" containerID="c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976" exitCode=0 Jan 26 09:39:58 crc kubenswrapper[4827]: I0126 09:39:58.260896 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerDied","Data":"c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976"} Jan 26 09:39:58 crc kubenswrapper[4827]: I0126 09:39:58.263387 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:40:00 crc kubenswrapper[4827]: I0126 09:40:00.279256 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerStarted","Data":"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329"} Jan 26 09:40:03 crc kubenswrapper[4827]: I0126 09:40:03.306785 4827 generic.go:334] "Generic (PLEG): container finished" podID="04a983e5-29b7-4971-95f0-99489f643962" containerID="aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329" exitCode=0 Jan 26 09:40:03 crc kubenswrapper[4827]: I0126 09:40:03.306857 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerDied","Data":"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329"} Jan 26 09:40:04 crc kubenswrapper[4827]: I0126 09:40:04.314977 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerStarted","Data":"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26"} Jan 26 09:40:04 crc kubenswrapper[4827]: I0126 09:40:04.333611 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dpb89" podStartSLOduration=2.883901631 podStartE2EDuration="8.333592383s" podCreationTimestamp="2026-01-26 09:39:56 +0000 UTC" firstStartedPulling="2026-01-26 09:39:58.263004327 +0000 UTC m=+2026.911676166" lastFinishedPulling="2026-01-26 09:40:03.712695099 +0000 UTC m=+2032.361366918" observedRunningTime="2026-01-26 09:40:04.330074716 +0000 UTC m=+2032.978746535" watchObservedRunningTime="2026-01-26 09:40:04.333592383 +0000 UTC m=+2032.982264202" Jan 26 09:40:06 crc kubenswrapper[4827]: I0126 09:40:06.705558 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:06 crc kubenswrapper[4827]: I0126 09:40:06.706823 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:07 crc kubenswrapper[4827]: I0126 09:40:07.751560 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dpb89" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="registry-server" probeResult="failure" output=< Jan 26 09:40:07 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 09:40:07 crc kubenswrapper[4827]: > Jan 26 09:40:12 crc kubenswrapper[4827]: I0126 
09:40:12.269021 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:40:12 crc kubenswrapper[4827]: I0126 09:40:12.269395 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:40:16 crc kubenswrapper[4827]: I0126 09:40:16.763790 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:16 crc kubenswrapper[4827]: I0126 09:40:16.820112 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:16 crc kubenswrapper[4827]: I0126 09:40:16.998372 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.416113 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dpb89" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="registry-server" containerID="cri-o://d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26" gracePeriod=2 Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.854207 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.895151 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities\") pod \"04a983e5-29b7-4971-95f0-99489f643962\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.895314 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbzr8\" (UniqueName: \"kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8\") pod \"04a983e5-29b7-4971-95f0-99489f643962\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.895460 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content\") pod \"04a983e5-29b7-4971-95f0-99489f643962\" (UID: \"04a983e5-29b7-4971-95f0-99489f643962\") " Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.896861 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities" (OuterVolumeSpecName: "utilities") pod "04a983e5-29b7-4971-95f0-99489f643962" (UID: "04a983e5-29b7-4971-95f0-99489f643962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.900361 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8" (OuterVolumeSpecName: "kube-api-access-jbzr8") pod "04a983e5-29b7-4971-95f0-99489f643962" (UID: "04a983e5-29b7-4971-95f0-99489f643962"). InnerVolumeSpecName "kube-api-access-jbzr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.997226 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:40:18 crc kubenswrapper[4827]: I0126 09:40:18.997429 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbzr8\" (UniqueName: \"kubernetes.io/projected/04a983e5-29b7-4971-95f0-99489f643962-kube-api-access-jbzr8\") on node \"crc\" DevicePath \"\"" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.018616 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04a983e5-29b7-4971-95f0-99489f643962" (UID: "04a983e5-29b7-4971-95f0-99489f643962"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.099606 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a983e5-29b7-4971-95f0-99489f643962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.424787 4827 generic.go:334] "Generic (PLEG): container finished" podID="04a983e5-29b7-4971-95f0-99489f643962" containerID="d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26" exitCode=0 Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.424829 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpb89" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.424839 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerDied","Data":"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26"} Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.424884 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpb89" event={"ID":"04a983e5-29b7-4971-95f0-99489f643962","Type":"ContainerDied","Data":"2313dcdff089d3361f2091e6ad8d23bf93ad4302b32e19334a7f0619dde50fb8"} Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.424908 4827 scope.go:117] "RemoveContainer" containerID="d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.449558 4827 scope.go:117] "RemoveContainer" containerID="aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.465477 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.474558 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dpb89"] Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.492465 4827 scope.go:117] "RemoveContainer" containerID="c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.512785 4827 scope.go:117] "RemoveContainer" containerID="d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26" Jan 26 09:40:19 crc kubenswrapper[4827]: E0126 09:40:19.513513 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26\": container with ID starting with d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26 not found: ID does not exist" containerID="d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.513726 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26"} err="failed to get container status \"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26\": rpc error: code = NotFound desc = could not find container \"d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26\": container with ID starting with d056c00228de58cfd24c785f86450142c0adbaf1a62f727dd91dc89b4d06ca26 not found: ID does not exist" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.513839 4827 scope.go:117] "RemoveContainer" containerID="aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329" Jan 26 09:40:19 crc kubenswrapper[4827]: E0126 09:40:19.514623 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329\": container with ID starting with aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329 not found: ID does not exist" containerID="aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.514695 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329"} err="failed to get container status \"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329\": rpc error: code = NotFound desc = could not find container \"aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329\": container with ID 
starting with aa1731dc5692be947ed752123bb725b64c710461e15f1f64afd2253c037f1329 not found: ID does not exist" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.514726 4827 scope.go:117] "RemoveContainer" containerID="c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976" Jan 26 09:40:19 crc kubenswrapper[4827]: E0126 09:40:19.515039 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976\": container with ID starting with c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976 not found: ID does not exist" containerID="c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.515075 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976"} err="failed to get container status \"c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976\": rpc error: code = NotFound desc = could not find container \"c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976\": container with ID starting with c255d8bdc4dc69e89b3aed83d116211c6abce72f95d3a8fb6c2f34d229d0d976 not found: ID does not exist" Jan 26 09:40:19 crc kubenswrapper[4827]: I0126 09:40:19.713204 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a983e5-29b7-4971-95f0-99489f643962" path="/var/lib/kubelet/pods/04a983e5-29b7-4971-95f0-99489f643962/volumes" Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.269065 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 
09:40:42.269706 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.269756 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.270508 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.270565 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8" gracePeriod=600 Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.614384 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8" exitCode=0 Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.614445 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8"} Jan 26 09:40:42 crc 
kubenswrapper[4827]: I0126 09:40:42.614802 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"} Jan 26 09:40:42 crc kubenswrapper[4827]: I0126 09:40:42.614831 4827 scope.go:117] "RemoveContainer" containerID="e0205ca1b659defa2d27660dd9178b21599c3cbaced16d89386d073ef2b0c702" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.588846 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:14 crc kubenswrapper[4827]: E0126 09:42:14.595246 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="extract-content" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.595291 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="extract-content" Jan 26 09:42:14 crc kubenswrapper[4827]: E0126 09:42:14.595314 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="extract-utilities" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.595324 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="extract-utilities" Jan 26 09:42:14 crc kubenswrapper[4827]: E0126 09:42:14.595354 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="registry-server" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.595364 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="registry-server" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.596332 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="04a983e5-29b7-4971-95f0-99489f643962" containerName="registry-server" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.599183 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.600968 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.711850 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.713207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.713440 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp29v\" (UniqueName: \"kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.814843 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp29v\" (UniqueName: \"kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v\") pod \"certified-operators-bwmnj\" 
(UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.816995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.817453 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.817417 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.818251 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:14 crc kubenswrapper[4827]: I0126 09:42:14.849858 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp29v\" (UniqueName: \"kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v\") pod \"certified-operators-bwmnj\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " 
pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:15 crc kubenswrapper[4827]: I0126 09:42:15.002319 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:15 crc kubenswrapper[4827]: I0126 09:42:15.503221 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:16 crc kubenswrapper[4827]: I0126 09:42:16.382908 4827 generic.go:334] "Generic (PLEG): container finished" podID="737e00e4-08e4-473c-b725-d2331993d30b" containerID="d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81" exitCode=0 Jan 26 09:42:16 crc kubenswrapper[4827]: I0126 09:42:16.382974 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerDied","Data":"d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81"} Jan 26 09:42:16 crc kubenswrapper[4827]: I0126 09:42:16.383189 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerStarted","Data":"ae9dbece328be7da37d3fd65f8ce50474f4c464bdd4c600245bc332f57c1c83c"} Jan 26 09:42:17 crc kubenswrapper[4827]: I0126 09:42:17.393079 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerStarted","Data":"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e"} Jan 26 09:42:18 crc kubenswrapper[4827]: I0126 09:42:18.406241 4827 generic.go:334] "Generic (PLEG): container finished" podID="737e00e4-08e4-473c-b725-d2331993d30b" containerID="182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e" exitCode=0 Jan 26 09:42:18 crc kubenswrapper[4827]: I0126 09:42:18.406297 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerDied","Data":"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e"} Jan 26 09:42:19 crc kubenswrapper[4827]: I0126 09:42:19.424547 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerStarted","Data":"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13"} Jan 26 09:42:19 crc kubenswrapper[4827]: I0126 09:42:19.452513 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bwmnj" podStartSLOduration=3.022844601 podStartE2EDuration="5.452482566s" podCreationTimestamp="2026-01-26 09:42:14 +0000 UTC" firstStartedPulling="2026-01-26 09:42:16.385909995 +0000 UTC m=+2165.034581814" lastFinishedPulling="2026-01-26 09:42:18.81554796 +0000 UTC m=+2167.464219779" observedRunningTime="2026-01-26 09:42:19.445848814 +0000 UTC m=+2168.094520643" watchObservedRunningTime="2026-01-26 09:42:19.452482566 +0000 UTC m=+2168.101154385" Jan 26 09:42:25 crc kubenswrapper[4827]: I0126 09:42:25.002766 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:25 crc kubenswrapper[4827]: I0126 09:42:25.004701 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:25 crc kubenswrapper[4827]: I0126 09:42:25.070576 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:25 crc kubenswrapper[4827]: I0126 09:42:25.526410 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:25 crc kubenswrapper[4827]: I0126 
09:42:25.580112 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:27 crc kubenswrapper[4827]: I0126 09:42:27.500130 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bwmnj" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="registry-server" containerID="cri-o://705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13" gracePeriod=2 Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.489710 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.510594 4827 generic.go:334] "Generic (PLEG): container finished" podID="737e00e4-08e4-473c-b725-d2331993d30b" containerID="705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13" exitCode=0 Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.510673 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwmnj" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.510664 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerDied","Data":"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13"} Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.510742 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwmnj" event={"ID":"737e00e4-08e4-473c-b725-d2331993d30b","Type":"ContainerDied","Data":"ae9dbece328be7da37d3fd65f8ce50474f4c464bdd4c600245bc332f57c1c83c"} Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.510773 4827 scope.go:117] "RemoveContainer" containerID="705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.539299 4827 scope.go:117] "RemoveContainer" containerID="182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.579945 4827 scope.go:117] "RemoveContainer" containerID="d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.601030 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp29v\" (UniqueName: \"kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v\") pod \"737e00e4-08e4-473c-b725-d2331993d30b\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.601148 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content\") pod \"737e00e4-08e4-473c-b725-d2331993d30b\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " Jan 26 09:42:28 
crc kubenswrapper[4827]: I0126 09:42:28.601314 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities\") pod \"737e00e4-08e4-473c-b725-d2331993d30b\" (UID: \"737e00e4-08e4-473c-b725-d2331993d30b\") " Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.602200 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities" (OuterVolumeSpecName: "utilities") pod "737e00e4-08e4-473c-b725-d2331993d30b" (UID: "737e00e4-08e4-473c-b725-d2331993d30b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.610027 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v" (OuterVolumeSpecName: "kube-api-access-zp29v") pod "737e00e4-08e4-473c-b725-d2331993d30b" (UID: "737e00e4-08e4-473c-b725-d2331993d30b"). InnerVolumeSpecName "kube-api-access-zp29v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.615823 4827 scope.go:117] "RemoveContainer" containerID="705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13" Jan 26 09:42:28 crc kubenswrapper[4827]: E0126 09:42:28.616445 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13\": container with ID starting with 705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13 not found: ID does not exist" containerID="705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.616485 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13"} err="failed to get container status \"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13\": rpc error: code = NotFound desc = could not find container \"705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13\": container with ID starting with 705c6a46b3cea131691a73a6285c721266fc7a3d6fabefe9947897557ef34b13 not found: ID does not exist" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.616676 4827 scope.go:117] "RemoveContainer" containerID="182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e" Jan 26 09:42:28 crc kubenswrapper[4827]: E0126 09:42:28.617443 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e\": container with ID starting with 182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e not found: ID does not exist" containerID="182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.617495 
4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e"} err="failed to get container status \"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e\": rpc error: code = NotFound desc = could not find container \"182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e\": container with ID starting with 182afb007cab1eef332f105456fc40797e98fd4bc2656d0e092b0870a9b3278e not found: ID does not exist" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.617529 4827 scope.go:117] "RemoveContainer" containerID="d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81" Jan 26 09:42:28 crc kubenswrapper[4827]: E0126 09:42:28.619080 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81\": container with ID starting with d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81 not found: ID does not exist" containerID="d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.619147 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81"} err="failed to get container status \"d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81\": rpc error: code = NotFound desc = could not find container \"d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81\": container with ID starting with d450b7635e117abe31b2c8f12133031e251234086fdd326bfe8a498027e99a81 not found: ID does not exist" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.653258 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "737e00e4-08e4-473c-b725-d2331993d30b" (UID: "737e00e4-08e4-473c-b725-d2331993d30b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.703101 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.703131 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp29v\" (UniqueName: \"kubernetes.io/projected/737e00e4-08e4-473c-b725-d2331993d30b-kube-api-access-zp29v\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.703161 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737e00e4-08e4-473c-b725-d2331993d30b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.842659 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:28 crc kubenswrapper[4827]: I0126 09:42:28.854336 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bwmnj"] Jan 26 09:42:29 crc kubenswrapper[4827]: I0126 09:42:29.711938 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737e00e4-08e4-473c-b725-d2331993d30b" path="/var/lib/kubelet/pods/737e00e4-08e4-473c-b725-d2331993d30b/volumes" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.950502 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:31 crc kubenswrapper[4827]: E0126 09:42:31.951241 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737e00e4-08e4-473c-b725-d2331993d30b" 
containerName="registry-server" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.951260 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="registry-server" Jan 26 09:42:31 crc kubenswrapper[4827]: E0126 09:42:31.951299 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="extract-utilities" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.951307 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="extract-utilities" Jan 26 09:42:31 crc kubenswrapper[4827]: E0126 09:42:31.951328 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="extract-content" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.951336 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="extract-content" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.951548 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="737e00e4-08e4-473c-b725-d2331993d30b" containerName="registry-server" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.953039 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:31 crc kubenswrapper[4827]: I0126 09:42:31.972927 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.078946 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.079078 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg58k\" (UniqueName: \"kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.079098 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.181307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg58k\" (UniqueName: \"kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.181360 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.181470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.181918 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.182019 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.216034 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg58k\" (UniqueName: \"kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k\") pod \"redhat-marketplace-fz2k8\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.274514 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:32 crc kubenswrapper[4827]: I0126 09:42:32.725851 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:33 crc kubenswrapper[4827]: I0126 09:42:33.559575 4827 generic.go:334] "Generic (PLEG): container finished" podID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerID="c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b" exitCode=0 Jan 26 09:42:33 crc kubenswrapper[4827]: I0126 09:42:33.559836 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerDied","Data":"c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b"} Jan 26 09:42:33 crc kubenswrapper[4827]: I0126 09:42:33.560103 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerStarted","Data":"058c856cff57f5273f5f689fbc480bad57dd427826dd2e40de530a3570abffd2"} Jan 26 09:42:34 crc kubenswrapper[4827]: I0126 09:42:34.568025 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerStarted","Data":"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea"} Jan 26 09:42:35 crc kubenswrapper[4827]: I0126 09:42:35.578186 4827 generic.go:334] "Generic (PLEG): container finished" podID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerID="805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea" exitCode=0 Jan 26 09:42:35 crc kubenswrapper[4827]: I0126 09:42:35.578230 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" 
event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerDied","Data":"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea"} Jan 26 09:42:36 crc kubenswrapper[4827]: I0126 09:42:36.587699 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerStarted","Data":"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b"} Jan 26 09:42:36 crc kubenswrapper[4827]: I0126 09:42:36.613258 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fz2k8" podStartSLOduration=2.9260864509999998 podStartE2EDuration="5.613239972s" podCreationTimestamp="2026-01-26 09:42:31 +0000 UTC" firstStartedPulling="2026-01-26 09:42:33.56509988 +0000 UTC m=+2182.213771749" lastFinishedPulling="2026-01-26 09:42:36.252253451 +0000 UTC m=+2184.900925270" observedRunningTime="2026-01-26 09:42:36.60735332 +0000 UTC m=+2185.256025139" watchObservedRunningTime="2026-01-26 09:42:36.613239972 +0000 UTC m=+2185.261911811" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.269221 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.269788 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.275577 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.276496 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.329894 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.689478 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:42 crc kubenswrapper[4827]: I0126 09:42:42.738070 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:44 crc kubenswrapper[4827]: I0126 09:42:44.656674 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fz2k8" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="registry-server" containerID="cri-o://0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b" gracePeriod=2 Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.634225 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.668675 4827 generic.go:334] "Generic (PLEG): container finished" podID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerID="0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b" exitCode=0 Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.668722 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerDied","Data":"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b"} Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.668752 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz2k8" event={"ID":"c63467b3-46a8-4580-bf66-3eb75d3eb235","Type":"ContainerDied","Data":"058c856cff57f5273f5f689fbc480bad57dd427826dd2e40de530a3570abffd2"} Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.668798 4827 scope.go:117] "RemoveContainer" containerID="0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.668937 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz2k8" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.702032 4827 scope.go:117] "RemoveContainer" containerID="805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.722940 4827 scope.go:117] "RemoveContainer" containerID="c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.760717 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities\") pod \"c63467b3-46a8-4580-bf66-3eb75d3eb235\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.760817 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content\") pod \"c63467b3-46a8-4580-bf66-3eb75d3eb235\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.760888 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg58k\" (UniqueName: \"kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k\") pod \"c63467b3-46a8-4580-bf66-3eb75d3eb235\" (UID: \"c63467b3-46a8-4580-bf66-3eb75d3eb235\") " Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.765423 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities" (OuterVolumeSpecName: "utilities") pod "c63467b3-46a8-4580-bf66-3eb75d3eb235" (UID: "c63467b3-46a8-4580-bf66-3eb75d3eb235"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.765766 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k" (OuterVolumeSpecName: "kube-api-access-wg58k") pod "c63467b3-46a8-4580-bf66-3eb75d3eb235" (UID: "c63467b3-46a8-4580-bf66-3eb75d3eb235"). InnerVolumeSpecName "kube-api-access-wg58k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.766856 4827 scope.go:117] "RemoveContainer" containerID="0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b" Jan 26 09:42:45 crc kubenswrapper[4827]: E0126 09:42:45.767294 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b\": container with ID starting with 0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b not found: ID does not exist" containerID="0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.767336 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b"} err="failed to get container status \"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b\": rpc error: code = NotFound desc = could not find container \"0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b\": container with ID starting with 0ace3d348a4ad4eeb515ff6454b059c229bbd38bd0c5b16d2c4400acdcc1364b not found: ID does not exist" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.767359 4827 scope.go:117] "RemoveContainer" containerID="805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea" Jan 26 09:42:45 crc kubenswrapper[4827]: E0126 09:42:45.767936 
4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea\": container with ID starting with 805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea not found: ID does not exist" containerID="805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.767977 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea"} err="failed to get container status \"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea\": rpc error: code = NotFound desc = could not find container \"805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea\": container with ID starting with 805b208353b2c810f85b54240d0b515ab4d28aa6580b1fed41ee3a159ff36eea not found: ID does not exist" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.768001 4827 scope.go:117] "RemoveContainer" containerID="c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b" Jan 26 09:42:45 crc kubenswrapper[4827]: E0126 09:42:45.768282 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b\": container with ID starting with c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b not found: ID does not exist" containerID="c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.768307 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b"} err="failed to get container status \"c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b\": rpc error: code = 
NotFound desc = could not find container \"c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b\": container with ID starting with c05e411de7e529c7597238461b04df41cd4e4dc90bac2148beefbd9d4c97c35b not found: ID does not exist" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.782232 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c63467b3-46a8-4580-bf66-3eb75d3eb235" (UID: "c63467b3-46a8-4580-bf66-3eb75d3eb235"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.863060 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.863094 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg58k\" (UniqueName: \"kubernetes.io/projected/c63467b3-46a8-4580-bf66-3eb75d3eb235-kube-api-access-wg58k\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:45 crc kubenswrapper[4827]: I0126 09:42:45.863104 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c63467b3-46a8-4580-bf66-3eb75d3eb235-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:46 crc kubenswrapper[4827]: I0126 09:42:46.003927 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:46 crc kubenswrapper[4827]: I0126 09:42:46.010754 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz2k8"] Jan 26 09:42:47 crc kubenswrapper[4827]: I0126 09:42:47.716786 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" path="/var/lib/kubelet/pods/c63467b3-46a8-4580-bf66-3eb75d3eb235/volumes" Jan 26 09:43:12 crc kubenswrapper[4827]: I0126 09:43:12.268944 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:43:12 crc kubenswrapper[4827]: I0126 09:43:12.269536 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.891672 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:31 crc kubenswrapper[4827]: E0126 09:43:31.893903 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="extract-utilities" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.894016 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="extract-utilities" Jan 26 09:43:31 crc kubenswrapper[4827]: E0126 09:43:31.894125 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="registry-server" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.894202 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="registry-server" Jan 26 09:43:31 crc kubenswrapper[4827]: E0126 09:43:31.894279 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="extract-content" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.894356 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="extract-content" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.894681 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63467b3-46a8-4580-bf66-3eb75d3eb235" containerName="registry-server" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.896282 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.912275 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.913137 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.913327 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:31 crc kubenswrapper[4827]: I0126 09:43:31.913361 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqtx\" (UniqueName: \"kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx\") pod \"community-operators-sw47j\" (UID: 
\"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.015199 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.015273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqtx\" (UniqueName: \"kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.015412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.016370 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.016521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") 
" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.042457 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqtx\" (UniqueName: \"kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx\") pod \"community-operators-sw47j\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.237631 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:32 crc kubenswrapper[4827]: I0126 09:43:32.791395 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:32 crc kubenswrapper[4827]: W0126 09:43:32.808307 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4afb33df_4ce3_423b_90c5_0e4187bcba31.slice/crio-8f6b316e924121cde581e2d406f581307ffaacdc827a9fa569fc05f42ef6b642 WatchSource:0}: Error finding container 8f6b316e924121cde581e2d406f581307ffaacdc827a9fa569fc05f42ef6b642: Status 404 returned error can't find the container with id 8f6b316e924121cde581e2d406f581307ffaacdc827a9fa569fc05f42ef6b642 Jan 26 09:43:33 crc kubenswrapper[4827]: I0126 09:43:33.080085 4827 generic.go:334] "Generic (PLEG): container finished" podID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerID="313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23" exitCode=0 Jan 26 09:43:33 crc kubenswrapper[4827]: I0126 09:43:33.080128 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerDied","Data":"313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23"} Jan 26 09:43:33 crc kubenswrapper[4827]: I0126 
09:43:33.080157 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerStarted","Data":"8f6b316e924121cde581e2d406f581307ffaacdc827a9fa569fc05f42ef6b642"} Jan 26 09:43:34 crc kubenswrapper[4827]: I0126 09:43:34.090464 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerStarted","Data":"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1"} Jan 26 09:43:35 crc kubenswrapper[4827]: I0126 09:43:35.104849 4827 generic.go:334] "Generic (PLEG): container finished" podID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerID="25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1" exitCode=0 Jan 26 09:43:35 crc kubenswrapper[4827]: I0126 09:43:35.105147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerDied","Data":"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1"} Jan 26 09:43:36 crc kubenswrapper[4827]: I0126 09:43:36.114371 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerStarted","Data":"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4"} Jan 26 09:43:36 crc kubenswrapper[4827]: I0126 09:43:36.141549 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sw47j" podStartSLOduration=2.701317736 podStartE2EDuration="5.141525684s" podCreationTimestamp="2026-01-26 09:43:31 +0000 UTC" firstStartedPulling="2026-01-26 09:43:33.083613636 +0000 UTC m=+2241.732285455" lastFinishedPulling="2026-01-26 09:43:35.523821574 +0000 UTC m=+2244.172493403" 
observedRunningTime="2026-01-26 09:43:36.135321763 +0000 UTC m=+2244.783993582" watchObservedRunningTime="2026-01-26 09:43:36.141525684 +0000 UTC m=+2244.790197503" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.238421 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.239903 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.269065 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.269406 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.269458 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.270247 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.270316 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" gracePeriod=600 Jan 26 09:43:42 crc kubenswrapper[4827]: I0126 09:43:42.316280 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:42 crc kubenswrapper[4827]: E0126 09:43:42.465411 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.189378 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" exitCode=0 Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.190432 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"} Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.190482 4827 scope.go:117] "RemoveContainer" containerID="1b3b42d41ee54adf7ba00f3e1d8add469f4c37b3dd87aae7084af55a5fed56f8" Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.190900 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:43:43 crc kubenswrapper[4827]: 
E0126 09:43:43.191150 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.278164 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:43 crc kubenswrapper[4827]: I0126 09:43:43.330699 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.212085 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sw47j" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="registry-server" containerID="cri-o://f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4" gracePeriod=2 Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.688817 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.871156 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities\") pod \"4afb33df-4ce3-423b-90c5-0e4187bcba31\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.871286 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqtx\" (UniqueName: \"kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx\") pod \"4afb33df-4ce3-423b-90c5-0e4187bcba31\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.871380 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content\") pod \"4afb33df-4ce3-423b-90c5-0e4187bcba31\" (UID: \"4afb33df-4ce3-423b-90c5-0e4187bcba31\") " Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.871967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities" (OuterVolumeSpecName: "utilities") pod "4afb33df-4ce3-423b-90c5-0e4187bcba31" (UID: "4afb33df-4ce3-423b-90c5-0e4187bcba31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.885797 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx" (OuterVolumeSpecName: "kube-api-access-4qqtx") pod "4afb33df-4ce3-423b-90c5-0e4187bcba31" (UID: "4afb33df-4ce3-423b-90c5-0e4187bcba31"). InnerVolumeSpecName "kube-api-access-4qqtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.941389 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4afb33df-4ce3-423b-90c5-0e4187bcba31" (UID: "4afb33df-4ce3-423b-90c5-0e4187bcba31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.973601 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.973662 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqtx\" (UniqueName: \"kubernetes.io/projected/4afb33df-4ce3-423b-90c5-0e4187bcba31-kube-api-access-4qqtx\") on node \"crc\" DevicePath \"\"" Jan 26 09:43:45 crc kubenswrapper[4827]: I0126 09:43:45.973678 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb33df-4ce3-423b-90c5-0e4187bcba31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.231867 4827 generic.go:334] "Generic (PLEG): container finished" podID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerID="f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4" exitCode=0 Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.231920 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerDied","Data":"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4"} Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.231931 4827 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw47j" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.231956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw47j" event={"ID":"4afb33df-4ce3-423b-90c5-0e4187bcba31","Type":"ContainerDied","Data":"8f6b316e924121cde581e2d406f581307ffaacdc827a9fa569fc05f42ef6b642"} Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.231973 4827 scope.go:117] "RemoveContainer" containerID="f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.272291 4827 scope.go:117] "RemoveContainer" containerID="25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.272494 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.301390 4827 scope.go:117] "RemoveContainer" containerID="313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.327358 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sw47j"] Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.353284 4827 scope.go:117] "RemoveContainer" containerID="f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4" Jan 26 09:43:46 crc kubenswrapper[4827]: E0126 09:43:46.354940 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4\": container with ID starting with f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4 not found: ID does not exist" containerID="f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.354988 
4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4"} err="failed to get container status \"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4\": rpc error: code = NotFound desc = could not find container \"f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4\": container with ID starting with f4d0c110de1c472b6e6e16a725413b6a861ee5192aff9ec1a55c76ef1751acc4 not found: ID does not exist" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.355015 4827 scope.go:117] "RemoveContainer" containerID="25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1" Jan 26 09:43:46 crc kubenswrapper[4827]: E0126 09:43:46.355344 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1\": container with ID starting with 25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1 not found: ID does not exist" containerID="25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.355369 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1"} err="failed to get container status \"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1\": rpc error: code = NotFound desc = could not find container \"25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1\": container with ID starting with 25e6f26e044d77bd599daedbe4f4f50bbfaf52072f330a4a9714e1b89e54e9e1 not found: ID does not exist" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.355389 4827 scope.go:117] "RemoveContainer" containerID="313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23" Jan 26 09:43:46 crc kubenswrapper[4827]: E0126 
09:43:46.355788 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23\": container with ID starting with 313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23 not found: ID does not exist" containerID="313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23" Jan 26 09:43:46 crc kubenswrapper[4827]: I0126 09:43:46.355810 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23"} err="failed to get container status \"313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23\": rpc error: code = NotFound desc = could not find container \"313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23\": container with ID starting with 313b6f53ee7ffd8f18f3e8e4b874f240139038776df98800cd44dc0d8e9adf23 not found: ID does not exist" Jan 26 09:43:47 crc kubenswrapper[4827]: I0126 09:43:47.715935 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" path="/var/lib/kubelet/pods/4afb33df-4ce3-423b-90c5-0e4187bcba31/volumes" Jan 26 09:43:54 crc kubenswrapper[4827]: I0126 09:43:54.703128 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:43:54 crc kubenswrapper[4827]: E0126 09:43:54.703974 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.801409 
4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.808940 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.818707 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.830960 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.836067 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.842412 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pbqln"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.848839 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zsz2r"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.856399 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9p6pd"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.861816 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.868743 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.891790 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nlxs5"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.900287 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5l22x"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.908212 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.915166 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tdk6j"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.924343 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.930446 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f7cp6"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.936942 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w6q65"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.945871 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g2sxq"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.952822 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5l22x"] Jan 26 09:43:59 crc kubenswrapper[4827]: I0126 09:43:59.960936 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-grz7q"] Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.716699 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069c6d28-af14-44c4-8f9a-1215f8a9cd57" path="/var/lib/kubelet/pods/069c6d28-af14-44c4-8f9a-1215f8a9cd57/volumes" 
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.718086 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d02e6b2-0975-44c1-a096-12a0491ace24" path="/var/lib/kubelet/pods/4d02e6b2-0975-44c1-a096-12a0491ace24/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.719111 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a032cd5-c61d-41a6-871b-998a68d29913" path="/var/lib/kubelet/pods/5a032cd5-c61d-41a6-871b-998a68d29913/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.720154 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f8f40b-ea88-4069-8e6c-f1729de76b8a" path="/var/lib/kubelet/pods/90f8f40b-ea88-4069-8e6c-f1729de76b8a/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.722718 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87" path="/var/lib/kubelet/pods/9fafdc29-4ae8-4f1e-8158-d54cc5f6fe87/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.724018 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8" path="/var/lib/kubelet/pods/c99a53d5-61a2-4ef0-b0fc-efc0dbabcbb8/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.725635 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2263311-624a-49ff-870a-14334cffbc56" path="/var/lib/kubelet/pods/d2263311-624a-49ff-870a-14334cffbc56/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.729866 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24bd1f9-4e6c-419d-b2b2-f3177fee4693" path="/var/lib/kubelet/pods/d24bd1f9-4e6c-419d-b2b2-f3177fee4693/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.730984 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee053ea1-a2b6-491a-8df3-caa4c6965566" path="/var/lib/kubelet/pods/ee053ea1-a2b6-491a-8df3-caa4c6965566/volumes"
Jan 26 09:44:01 crc kubenswrapper[4827]: I0126 09:44:01.732214 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f543c02c-f09c-49aa-950c-d74789684e3a" path="/var/lib/kubelet/pods/f543c02c-f09c-49aa-950c-d74789684e3a/volumes"
Jan 26 09:44:09 crc kubenswrapper[4827]: I0126 09:44:09.703709 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:44:09 crc kubenswrapper[4827]: E0126 09:44:09.704928 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.423936 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"]
Jan 26 09:44:13 crc kubenswrapper[4827]: E0126 09:44:13.424712 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="extract-content"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.424728 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="extract-content"
Jan 26 09:44:13 crc kubenswrapper[4827]: E0126 09:44:13.424739 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="registry-server"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.424748 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="registry-server"
Jan 26 09:44:13 crc kubenswrapper[4827]: E0126 09:44:13.424776 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="extract-utilities"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.424785 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="extract-utilities"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.425002 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb33df-4ce3-423b-90c5-0e4187bcba31" containerName="registry-server"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.425780 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.430181 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.430396 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.430548 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.430752 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.435474 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"]
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.437838 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.501091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.501397 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.501490 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.501549 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnv98\" (UniqueName: \"kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.501653 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.602866 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.602935 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.602990 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.603023 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnv98\" (UniqueName: \"kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.603091 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.609008 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.609060 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.609889 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.610480 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.629167 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnv98\" (UniqueName: \"kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:13 crc kubenswrapper[4827]: I0126 09:44:13.746071 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:14 crc kubenswrapper[4827]: I0126 09:44:14.312619 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"]
Jan 26 09:44:14 crc kubenswrapper[4827]: I0126 09:44:14.457758 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl" event={"ID":"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2","Type":"ContainerStarted","Data":"9dce622abb85f1c2f985bf088d83a9afd904530ca3c06bad48c29839beb5c7b9"}
Jan 26 09:44:15 crc kubenswrapper[4827]: I0126 09:44:15.468324 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl" event={"ID":"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2","Type":"ContainerStarted","Data":"be71d15140c43ead5bc77e9ba3ef7b8c683138e1b5961d081c5b9c88cf828e02"}
Jan 26 09:44:15 crc kubenswrapper[4827]: I0126 09:44:15.494698 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl" podStartSLOduration=2.037358068 podStartE2EDuration="2.494621222s" podCreationTimestamp="2026-01-26 09:44:13 +0000 UTC" firstStartedPulling="2026-01-26 09:44:14.31790167 +0000 UTC m=+2282.966573499" lastFinishedPulling="2026-01-26 09:44:14.775164834 +0000 UTC m=+2283.423836653" observedRunningTime="2026-01-26 09:44:15.484962566 +0000 UTC m=+2284.133634395" watchObservedRunningTime="2026-01-26 09:44:15.494621222 +0000 UTC m=+2284.143293041"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.512998 4827 scope.go:117] "RemoveContainer" containerID="5afb6392fc4f347a653bc36373f1247f63b15a597e15daa0a985ded796d5acfe"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.556065 4827 scope.go:117] "RemoveContainer" containerID="12aab131f1dc47605dbbc80b0ed0f97ba005b574ec162eed75a9bb3250d950f1"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.671566 4827 scope.go:117] "RemoveContainer" containerID="8b536170d98ced33307a229bbc7eba259c715d1d397d3c8d02459cc71da033f2"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.727672 4827 scope.go:117] "RemoveContainer" containerID="7b92c3f9106e34b5644403a334d94c2033f9028a3766ebb00d5fd3dbaba2a815"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.788691 4827 scope.go:117] "RemoveContainer" containerID="88b92912b4e71af852fe0fa065c1a4d2a9592625ad248bc575fab754b579ec08"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.834179 4827 scope.go:117] "RemoveContainer" containerID="cb5c94c31aaa9e0c26c141ef2f47551bb7994a9fafc1183daff236c74b7a472d"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.870010 4827 scope.go:117] "RemoveContainer" containerID="62c5c8137a8e3c06974f5d220803dac864259f04135c8c8a3da672eb6c27710a"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.938809 4827 scope.go:117] "RemoveContainer" containerID="64804b4d1ff1bdeefe7667d919a1dc5c0904b5ca8e2a4b6afef967da90b06539"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.966245 4827 scope.go:117] "RemoveContainer" containerID="b7c1fd544a6a55d447a4c64db71b16a4f9ab10f8a61e86d66618d6f94591ee89"
Jan 26 09:44:17 crc kubenswrapper[4827]: I0126 09:44:17.993346 4827 scope.go:117] "RemoveContainer" containerID="08f1c46c3cf919c4718ec29260da95ea7347da0096f77291766f5527921e3f89"
Jan 26 09:44:22 crc kubenswrapper[4827]: I0126 09:44:22.702906 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:44:22 crc kubenswrapper[4827]: E0126 09:44:22.703721 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:44:28 crc kubenswrapper[4827]: I0126 09:44:28.584076 4827 generic.go:334] "Generic (PLEG): container finished" podID="e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" containerID="be71d15140c43ead5bc77e9ba3ef7b8c683138e1b5961d081c5b9c88cf828e02" exitCode=0
Jan 26 09:44:28 crc kubenswrapper[4827]: I0126 09:44:28.584310 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl" event={"ID":"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2","Type":"ContainerDied","Data":"be71d15140c43ead5bc77e9ba3ef7b8c683138e1b5961d081c5b9c88cf828e02"}
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.051813 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.176862 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam\") pod \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") "
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.176965 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph\") pod \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") "
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.177015 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle\") pod \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") "
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.177076 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnv98\" (UniqueName: \"kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98\") pod \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") "
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.177117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory\") pod \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\" (UID: \"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2\") "
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.183862 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" (UID: "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.183877 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98" (OuterVolumeSpecName: "kube-api-access-rnv98") pod "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" (UID: "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2"). InnerVolumeSpecName "kube-api-access-rnv98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.199697 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph" (OuterVolumeSpecName: "ceph") pod "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" (UID: "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.212262 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory" (OuterVolumeSpecName: "inventory") pod "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" (UID: "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.217510 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" (UID: "e5c7854f-b129-4e2a-9af1-ce45d61e1ae2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.278703 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.278738 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.278752 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnv98\" (UniqueName: \"kubernetes.io/projected/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-kube-api-access-rnv98\") on node \"crc\" DevicePath \"\""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.278764 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.278779 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c7854f-b129-4e2a-9af1-ce45d61e1ae2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.599275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl" event={"ID":"e5c7854f-b129-4e2a-9af1-ce45d61e1ae2","Type":"ContainerDied","Data":"9dce622abb85f1c2f985bf088d83a9afd904530ca3c06bad48c29839beb5c7b9"}
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.599820 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dce622abb85f1c2f985bf088d83a9afd904530ca3c06bad48c29839beb5c7b9"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.599291 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.761052 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"]
Jan 26 09:44:30 crc kubenswrapper[4827]: E0126 09:44:30.761451 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.761468 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.761630 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c7854f-b129-4e2a-9af1-ce45d61e1ae2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.762241 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.765015 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.765136 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.765015 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.766490 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.766834 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.783569 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"]
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.917979 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.918036 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.918077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.918136 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2679v\" (UniqueName: \"kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:30 crc kubenswrapper[4827]: I0126 09:44:30.918253 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.020950 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.021011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.021045 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2679v\" (UniqueName: \"kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.021170 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.021226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.039467 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.039842 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.040327 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.046864 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.054186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2679v\" (UniqueName: \"kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.077483 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"
Jan 26 09:44:31 crc kubenswrapper[4827]: I0126 09:44:31.673395 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k"]
Jan 26 09:44:32 crc kubenswrapper[4827]: I0126 09:44:32.615612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" event={"ID":"e2b5fccf-d108-4563-9e78-16e31b6959bf","Type":"ContainerStarted","Data":"6bee1a2896c6737ef7d28dca5e476067693fd2cdb66d9868413557429cc0351e"}
Jan 26 09:44:33 crc kubenswrapper[4827]: I0126 09:44:33.623795 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" event={"ID":"e2b5fccf-d108-4563-9e78-16e31b6959bf","Type":"ContainerStarted","Data":"97f7120dcaff59f2d0e0cb47e394d97a0b88e1ea382bae14e46f48f24662cf2a"}
Jan 26 09:44:33 crc kubenswrapper[4827]: I0126 09:44:33.647880 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" podStartSLOduration=2.506729752 podStartE2EDuration="3.647861114s" podCreationTimestamp="2026-01-26 09:44:30 +0000 UTC" firstStartedPulling="2026-01-26 09:44:31.69033745 +0000 UTC m=+2300.339009269" lastFinishedPulling="2026-01-26 09:44:32.831468802 +0000 UTC m=+2301.480140631" observedRunningTime="2026-01-26 09:44:33.646462756 +0000 UTC m=+2302.295134575" watchObservedRunningTime="2026-01-26 09:44:33.647861114 +0000 UTC m=+2302.296532933"
Jan 26 09:44:37 crc kubenswrapper[4827]: I0126 09:44:37.703365 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:44:37 crc kubenswrapper[4827]: E0126 09:44:37.704113 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:44:49 crc kubenswrapper[4827]: I0126 09:44:49.704074 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:44:49 crc kubenswrapper[4827]: E0126 09:44:49.706008 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.157584 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"]
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.164836 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.169321 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.169360 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.183703 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"]
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.262935 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.263256 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgx5k\" (UniqueName: \"kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.263287 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.365455 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.365528 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgx5k\" (UniqueName: \"kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.365554 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.366529 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.371771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.386826 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgx5k\" (UniqueName: \"kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k\") pod \"collect-profiles-29490345-8z2rg\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.500050 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"
Jan 26 09:45:00 crc kubenswrapper[4827]: I0126 09:45:00.963139 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"]
Jan 26 09:45:01 crc kubenswrapper[4827]: I0126 09:45:01.710208 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:45:01 crc kubenswrapper[4827]: E0126 09:45:01.710936 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:45:01 crc kubenswrapper[4827]: I0126 09:45:01.971678 4827 generic.go:334] "Generic (PLEG): container finished" podID="aa65d1e5-d891-43e0-a7a4-77decb5e06ce" containerID="fc130b88ccffaf3debb786f3e21cf52ef0f64753d036c015da4bc170bec7d8ad"
exitCode=0 Jan 26 09:45:01 crc kubenswrapper[4827]: I0126 09:45:01.971733 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg" event={"ID":"aa65d1e5-d891-43e0-a7a4-77decb5e06ce","Type":"ContainerDied","Data":"fc130b88ccffaf3debb786f3e21cf52ef0f64753d036c015da4bc170bec7d8ad"} Jan 26 09:45:01 crc kubenswrapper[4827]: I0126 09:45:01.971762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg" event={"ID":"aa65d1e5-d891-43e0-a7a4-77decb5e06ce","Type":"ContainerStarted","Data":"6c8f381ee7d7a237f9d85d3f5342e673c1aa85570979826300b8e929f7ca4c37"} Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.260921 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.323088 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgx5k\" (UniqueName: \"kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k\") pod \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.323398 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume\") pod \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.323437 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume\") pod \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\" (UID: \"aa65d1e5-d891-43e0-a7a4-77decb5e06ce\") " Jan 26 
09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.324082 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa65d1e5-d891-43e0-a7a4-77decb5e06ce" (UID: "aa65d1e5-d891-43e0-a7a4-77decb5e06ce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.329431 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa65d1e5-d891-43e0-a7a4-77decb5e06ce" (UID: "aa65d1e5-d891-43e0-a7a4-77decb5e06ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.331162 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k" (OuterVolumeSpecName: "kube-api-access-lgx5k") pod "aa65d1e5-d891-43e0-a7a4-77decb5e06ce" (UID: "aa65d1e5-d891-43e0-a7a4-77decb5e06ce"). InnerVolumeSpecName "kube-api-access-lgx5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.425001 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.425041 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.425051 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgx5k\" (UniqueName: \"kubernetes.io/projected/aa65d1e5-d891-43e0-a7a4-77decb5e06ce-kube-api-access-lgx5k\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.989809 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg" Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.989804 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg" event={"ID":"aa65d1e5-d891-43e0-a7a4-77decb5e06ce","Type":"ContainerDied","Data":"6c8f381ee7d7a237f9d85d3f5342e673c1aa85570979826300b8e929f7ca4c37"} Jan 26 09:45:03 crc kubenswrapper[4827]: I0126 09:45:03.990047 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c8f381ee7d7a237f9d85d3f5342e673c1aa85570979826300b8e929f7ca4c37" Jan 26 09:45:04 crc kubenswrapper[4827]: I0126 09:45:04.353342 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"] Jan 26 09:45:04 crc kubenswrapper[4827]: I0126 09:45:04.361472 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-vd2hb"] Jan 26 09:45:05 crc kubenswrapper[4827]: I0126 09:45:05.715329 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a" path="/var/lib/kubelet/pods/163682a7-ad3b-42e3-aa8c-5ffdfcc90c8a/volumes" Jan 26 09:45:12 crc kubenswrapper[4827]: I0126 09:45:12.703582 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:45:12 crc kubenswrapper[4827]: E0126 09:45:12.704353 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:45:18 crc kubenswrapper[4827]: I0126 09:45:18.171178 4827 scope.go:117] "RemoveContainer" containerID="b7dea8cbea12b61836f220f41d0e2f3dfddbadc1b787a2b8503ca4eb2715bfb7" Jan 26 09:45:27 crc kubenswrapper[4827]: I0126 09:45:27.703318 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:45:27 crc kubenswrapper[4827]: E0126 09:45:27.704157 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:45:38 crc kubenswrapper[4827]: I0126 09:45:38.702844 4827 scope.go:117] "RemoveContainer" 
containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:45:38 crc kubenswrapper[4827]: E0126 09:45:38.705372 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:45:52 crc kubenswrapper[4827]: I0126 09:45:52.703432 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:45:52 crc kubenswrapper[4827]: E0126 09:45:52.704366 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:05 crc kubenswrapper[4827]: I0126 09:46:05.702922 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:46:05 crc kubenswrapper[4827]: E0126 09:46:05.703733 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:16 crc kubenswrapper[4827]: I0126 09:46:16.593215 4827 generic.go:334] 
"Generic (PLEG): container finished" podID="e2b5fccf-d108-4563-9e78-16e31b6959bf" containerID="97f7120dcaff59f2d0e0cb47e394d97a0b88e1ea382bae14e46f48f24662cf2a" exitCode=0 Jan 26 09:46:16 crc kubenswrapper[4827]: I0126 09:46:16.593315 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" event={"ID":"e2b5fccf-d108-4563-9e78-16e31b6959bf","Type":"ContainerDied","Data":"97f7120dcaff59f2d0e0cb47e394d97a0b88e1ea382bae14e46f48f24662cf2a"} Jan 26 09:46:16 crc kubenswrapper[4827]: I0126 09:46:16.702804 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:46:16 crc kubenswrapper[4827]: E0126 09:46:16.703589 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.166516 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.243612 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam\") pod \"e2b5fccf-d108-4563-9e78-16e31b6959bf\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.243881 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory\") pod \"e2b5fccf-d108-4563-9e78-16e31b6959bf\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.243947 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle\") pod \"e2b5fccf-d108-4563-9e78-16e31b6959bf\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.244877 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph\") pod \"e2b5fccf-d108-4563-9e78-16e31b6959bf\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.245001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2679v\" (UniqueName: \"kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v\") pod \"e2b5fccf-d108-4563-9e78-16e31b6959bf\" (UID: \"e2b5fccf-d108-4563-9e78-16e31b6959bf\") " Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.249474 4827 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v" (OuterVolumeSpecName: "kube-api-access-2679v") pod "e2b5fccf-d108-4563-9e78-16e31b6959bf" (UID: "e2b5fccf-d108-4563-9e78-16e31b6959bf"). InnerVolumeSpecName "kube-api-access-2679v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.250163 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph" (OuterVolumeSpecName: "ceph") pod "e2b5fccf-d108-4563-9e78-16e31b6959bf" (UID: "e2b5fccf-d108-4563-9e78-16e31b6959bf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.261967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e2b5fccf-d108-4563-9e78-16e31b6959bf" (UID: "e2b5fccf-d108-4563-9e78-16e31b6959bf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.275926 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2b5fccf-d108-4563-9e78-16e31b6959bf" (UID: "e2b5fccf-d108-4563-9e78-16e31b6959bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.289859 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory" (OuterVolumeSpecName: "inventory") pod "e2b5fccf-d108-4563-9e78-16e31b6959bf" (UID: "e2b5fccf-d108-4563-9e78-16e31b6959bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.350559 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.350589 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.350599 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.350607 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2679v\" (UniqueName: \"kubernetes.io/projected/e2b5fccf-d108-4563-9e78-16e31b6959bf-kube-api-access-2679v\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.350615 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2b5fccf-d108-4563-9e78-16e31b6959bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.612407 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" event={"ID":"e2b5fccf-d108-4563-9e78-16e31b6959bf","Type":"ContainerDied","Data":"6bee1a2896c6737ef7d28dca5e476067693fd2cdb66d9868413557429cc0351e"} Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.612460 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bee1a2896c6737ef7d28dca5e476067693fd2cdb66d9868413557429cc0351e" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.612781 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.740853 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26"] Jan 26 09:46:18 crc kubenswrapper[4827]: E0126 09:46:18.741827 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b5fccf-d108-4563-9e78-16e31b6959bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.742033 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b5fccf-d108-4563-9e78-16e31b6959bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:18 crc kubenswrapper[4827]: E0126 09:46:18.742053 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa65d1e5-d891-43e0-a7a4-77decb5e06ce" containerName="collect-profiles" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.742062 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa65d1e5-d891-43e0-a7a4-77decb5e06ce" containerName="collect-profiles" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.742295 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b5fccf-d108-4563-9e78-16e31b6959bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.742326 4827 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aa65d1e5-d891-43e0-a7a4-77decb5e06ce" containerName="collect-profiles" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.743070 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.750175 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.750239 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.750901 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.752200 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.760340 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.771810 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26"] Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.859344 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.859427 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.859538 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.859591 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87f8j\" (UniqueName: \"kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.961542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.961692 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.961729 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87f8j\" (UniqueName: \"kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.961817 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.967364 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.970226 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.971211 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:18 crc kubenswrapper[4827]: I0126 09:46:18.986702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87f8j\" (UniqueName: \"kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6x26\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:19 crc kubenswrapper[4827]: I0126 09:46:19.060610 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:19 crc kubenswrapper[4827]: I0126 09:46:19.566730 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26"] Jan 26 09:46:19 crc kubenswrapper[4827]: I0126 09:46:19.574594 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:46:19 crc kubenswrapper[4827]: I0126 09:46:19.620956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" event={"ID":"dffac4b8-657b-40f3-86cc-6138f70d889b","Type":"ContainerStarted","Data":"c8b57a25740fefd49c9442fca1e00fb78e6b465dfd381683a2c3681ed2add75d"} Jan 26 09:46:20 crc kubenswrapper[4827]: I0126 09:46:20.628783 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" event={"ID":"dffac4b8-657b-40f3-86cc-6138f70d889b","Type":"ContainerStarted","Data":"efadff547fa7b687b00d9de6d12dbaea9baba2d0e72356ea1334a432fc5920e2"} Jan 26 09:46:20 crc kubenswrapper[4827]: I0126 09:46:20.655836 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" podStartSLOduration=2.15304681 podStartE2EDuration="2.655814291s" podCreationTimestamp="2026-01-26 09:46:18 +0000 UTC" firstStartedPulling="2026-01-26 09:46:19.574064519 +0000 UTC m=+2408.222736338" lastFinishedPulling="2026-01-26 09:46:20.07683199 +0000 UTC m=+2408.725503819" observedRunningTime="2026-01-26 09:46:20.650804334 +0000 UTC m=+2409.299476153" watchObservedRunningTime="2026-01-26 09:46:20.655814291 +0000 UTC m=+2409.304486110" Jan 26 09:46:30 crc kubenswrapper[4827]: I0126 09:46:30.703501 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 
09:46:30 crc kubenswrapper[4827]: E0126 09:46:30.704224 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:44 crc kubenswrapper[4827]: I0126 09:46:44.704612 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:46:44 crc kubenswrapper[4827]: E0126 09:46:44.705485 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:48 crc kubenswrapper[4827]: I0126 09:46:48.873915 4827 generic.go:334] "Generic (PLEG): container finished" podID="dffac4b8-657b-40f3-86cc-6138f70d889b" containerID="efadff547fa7b687b00d9de6d12dbaea9baba2d0e72356ea1334a432fc5920e2" exitCode=0 Jan 26 09:46:48 crc kubenswrapper[4827]: I0126 09:46:48.874932 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" event={"ID":"dffac4b8-657b-40f3-86cc-6138f70d889b","Type":"ContainerDied","Data":"efadff547fa7b687b00d9de6d12dbaea9baba2d0e72356ea1334a432fc5920e2"} Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.440510 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.601829 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87f8j\" (UniqueName: \"kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j\") pod \"dffac4b8-657b-40f3-86cc-6138f70d889b\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.601974 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam\") pod \"dffac4b8-657b-40f3-86cc-6138f70d889b\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.602042 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph\") pod \"dffac4b8-657b-40f3-86cc-6138f70d889b\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.602125 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory\") pod \"dffac4b8-657b-40f3-86cc-6138f70d889b\" (UID: \"dffac4b8-657b-40f3-86cc-6138f70d889b\") " Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.609923 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j" (OuterVolumeSpecName: "kube-api-access-87f8j") pod "dffac4b8-657b-40f3-86cc-6138f70d889b" (UID: "dffac4b8-657b-40f3-86cc-6138f70d889b"). InnerVolumeSpecName "kube-api-access-87f8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.610264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph" (OuterVolumeSpecName: "ceph") pod "dffac4b8-657b-40f3-86cc-6138f70d889b" (UID: "dffac4b8-657b-40f3-86cc-6138f70d889b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.631725 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory" (OuterVolumeSpecName: "inventory") pod "dffac4b8-657b-40f3-86cc-6138f70d889b" (UID: "dffac4b8-657b-40f3-86cc-6138f70d889b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.634440 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dffac4b8-657b-40f3-86cc-6138f70d889b" (UID: "dffac4b8-657b-40f3-86cc-6138f70d889b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.704257 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.704321 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87f8j\" (UniqueName: \"kubernetes.io/projected/dffac4b8-657b-40f3-86cc-6138f70d889b-kube-api-access-87f8j\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.704336 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.704347 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dffac4b8-657b-40f3-86cc-6138f70d889b-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.889962 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" event={"ID":"dffac4b8-657b-40f3-86cc-6138f70d889b","Type":"ContainerDied","Data":"c8b57a25740fefd49c9442fca1e00fb78e6b465dfd381683a2c3681ed2add75d"} Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.890221 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b57a25740fefd49c9442fca1e00fb78e6b465dfd381683a2c3681ed2add75d" Jan 26 09:46:50 crc kubenswrapper[4827]: I0126 09:46:50.890417 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6x26" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.001924 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk"] Jan 26 09:46:51 crc kubenswrapper[4827]: E0126 09:46:51.002349 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffac4b8-657b-40f3-86cc-6138f70d889b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.002379 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffac4b8-657b-40f3-86cc-6138f70d889b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.002603 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffac4b8-657b-40f3-86cc-6138f70d889b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.003312 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.012630 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.012895 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.013088 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.013109 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.013207 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.022774 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk"] Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.115341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwgg\" (UniqueName: \"kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.115976 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: 
\"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.116121 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.116178 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.217823 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.217906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.217953 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.218041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwgg\" (UniqueName: \"kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.223670 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.223757 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.225295 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.239358 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwgg\" (UniqueName: \"kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.330656 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.694715 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk"] Jan 26 09:46:51 crc kubenswrapper[4827]: I0126 09:46:51.898121 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" event={"ID":"448918db-8118-4738-aeed-81ba5f247cbb","Type":"ContainerStarted","Data":"ec7f57be8c23cba080a99bf9fc861cdcf63658be2a006a6fc7cfbe310d238590"} Jan 26 09:46:52 crc kubenswrapper[4827]: I0126 09:46:52.911040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" event={"ID":"448918db-8118-4738-aeed-81ba5f247cbb","Type":"ContainerStarted","Data":"73de723361cb5860e3338a117d2ead927cab41356e1db582baa564910010951e"} Jan 26 09:46:52 crc kubenswrapper[4827]: I0126 09:46:52.929139 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" podStartSLOduration=2.490791696 
podStartE2EDuration="2.929122569s" podCreationTimestamp="2026-01-26 09:46:50 +0000 UTC" firstStartedPulling="2026-01-26 09:46:51.702829109 +0000 UTC m=+2440.351500928" lastFinishedPulling="2026-01-26 09:46:52.141159982 +0000 UTC m=+2440.789831801" observedRunningTime="2026-01-26 09:46:52.92735379 +0000 UTC m=+2441.576025599" watchObservedRunningTime="2026-01-26 09:46:52.929122569 +0000 UTC m=+2441.577794388" Jan 26 09:46:56 crc kubenswrapper[4827]: I0126 09:46:56.703169 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:46:56 crc kubenswrapper[4827]: E0126 09:46:56.704049 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:46:57 crc kubenswrapper[4827]: I0126 09:46:57.947952 4827 generic.go:334] "Generic (PLEG): container finished" podID="448918db-8118-4738-aeed-81ba5f247cbb" containerID="73de723361cb5860e3338a117d2ead927cab41356e1db582baa564910010951e" exitCode=0 Jan 26 09:46:57 crc kubenswrapper[4827]: I0126 09:46:57.948035 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" event={"ID":"448918db-8118-4738-aeed-81ba5f247cbb","Type":"ContainerDied","Data":"73de723361cb5860e3338a117d2ead927cab41356e1db582baa564910010951e"} Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.322800 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.386008 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory\") pod \"448918db-8118-4738-aeed-81ba5f247cbb\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.386058 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam\") pod \"448918db-8118-4738-aeed-81ba5f247cbb\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.386090 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph\") pod \"448918db-8118-4738-aeed-81ba5f247cbb\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.386116 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jwgg\" (UniqueName: \"kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg\") pod \"448918db-8118-4738-aeed-81ba5f247cbb\" (UID: \"448918db-8118-4738-aeed-81ba5f247cbb\") " Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.394451 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph" (OuterVolumeSpecName: "ceph") pod "448918db-8118-4738-aeed-81ba5f247cbb" (UID: "448918db-8118-4738-aeed-81ba5f247cbb"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.405245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg" (OuterVolumeSpecName: "kube-api-access-7jwgg") pod "448918db-8118-4738-aeed-81ba5f247cbb" (UID: "448918db-8118-4738-aeed-81ba5f247cbb"). InnerVolumeSpecName "kube-api-access-7jwgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.414874 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory" (OuterVolumeSpecName: "inventory") pod "448918db-8118-4738-aeed-81ba5f247cbb" (UID: "448918db-8118-4738-aeed-81ba5f247cbb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.439353 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "448918db-8118-4738-aeed-81ba5f247cbb" (UID: "448918db-8118-4738-aeed-81ba5f247cbb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.488047 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.488507 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.488744 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/448918db-8118-4738-aeed-81ba5f247cbb-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.488815 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jwgg\" (UniqueName: \"kubernetes.io/projected/448918db-8118-4738-aeed-81ba5f247cbb-kube-api-access-7jwgg\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.963021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" event={"ID":"448918db-8118-4738-aeed-81ba5f247cbb","Type":"ContainerDied","Data":"ec7f57be8c23cba080a99bf9fc861cdcf63658be2a006a6fc7cfbe310d238590"} Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.963073 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7f57be8c23cba080a99bf9fc861cdcf63658be2a006a6fc7cfbe310d238590" Jan 26 09:46:59 crc kubenswrapper[4827]: I0126 09:46:59.963074 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.041743 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb"] Jan 26 09:47:00 crc kubenswrapper[4827]: E0126 09:47:00.042189 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448918db-8118-4738-aeed-81ba5f247cbb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.042214 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="448918db-8118-4738-aeed-81ba5f247cbb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.042427 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="448918db-8118-4738-aeed-81ba5f247cbb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.043165 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.046559 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.046808 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.047191 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.047373 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.055228 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb"] Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.055700 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.099484 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.099576 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsk5g\" (UniqueName: \"kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: 
\"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.099724 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.099746 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.201931 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.201988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.202105 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.202149 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsk5g\" (UniqueName: \"kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.207125 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.208300 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.209144 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.216590 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsk5g\" (UniqueName: \"kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gffwb\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.419623 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.960467 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb"] Jan 26 09:47:00 crc kubenswrapper[4827]: I0126 09:47:00.972935 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" event={"ID":"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5","Type":"ContainerStarted","Data":"f7853114344ca187d79f1ee93a6dc67c61891a6c2af3bdca5ac431ea4d3709bb"} Jan 26 09:47:01 crc kubenswrapper[4827]: I0126 09:47:01.983377 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" event={"ID":"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5","Type":"ContainerStarted","Data":"2d40c3b26fb6e16e395652dd9d72167a0cd7349144233c2a06de2d4368cd3cc0"} Jan 26 09:47:02 crc kubenswrapper[4827]: I0126 09:47:02.003917 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" podStartSLOduration=1.5221439129999998 podStartE2EDuration="2.003893793s" podCreationTimestamp="2026-01-26 09:47:00 +0000 UTC" firstStartedPulling="2026-01-26 09:47:00.964471799 +0000 UTC m=+2449.613143628" 
lastFinishedPulling="2026-01-26 09:47:01.446221689 +0000 UTC m=+2450.094893508" observedRunningTime="2026-01-26 09:47:02.001427205 +0000 UTC m=+2450.650099054" watchObservedRunningTime="2026-01-26 09:47:02.003893793 +0000 UTC m=+2450.652565622"
Jan 26 09:47:10 crc kubenswrapper[4827]: I0126 09:47:10.703473 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:47:10 crc kubenswrapper[4827]: E0126 09:47:10.704235 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:47:22 crc kubenswrapper[4827]: I0126 09:47:22.702816 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:47:22 crc kubenswrapper[4827]: E0126 09:47:22.703893 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:47:35 crc kubenswrapper[4827]: I0126 09:47:35.703003 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:47:35 crc kubenswrapper[4827]: E0126 09:47:35.703763 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:47:44 crc kubenswrapper[4827]: I0126 09:47:44.401321 4827 generic.go:334] "Generic (PLEG): container finished" podID="e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" containerID="2d40c3b26fb6e16e395652dd9d72167a0cd7349144233c2a06de2d4368cd3cc0" exitCode=0
Jan 26 09:47:44 crc kubenswrapper[4827]: I0126 09:47:44.401406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" event={"ID":"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5","Type":"ContainerDied","Data":"2d40c3b26fb6e16e395652dd9d72167a0cd7349144233c2a06de2d4368cd3cc0"}
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.834529 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb"
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.901123 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory\") pod \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") "
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.901245 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsk5g\" (UniqueName: \"kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g\") pod \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") "
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.901290 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph\") pod \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") "
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.901381 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam\") pod \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\" (UID: \"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5\") "
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.908725 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g" (OuterVolumeSpecName: "kube-api-access-lsk5g") pod "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" (UID: "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5"). InnerVolumeSpecName "kube-api-access-lsk5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.919928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph" (OuterVolumeSpecName: "ceph") pod "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" (UID: "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.932606 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory" (OuterVolumeSpecName: "inventory") pod "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" (UID: "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:45 crc kubenswrapper[4827]: I0126 09:47:45.946779 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" (UID: "e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.004089 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.004139 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.004154 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.004168 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsk5g\" (UniqueName: \"kubernetes.io/projected/e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5-kube-api-access-lsk5g\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.425170 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb" event={"ID":"e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5","Type":"ContainerDied","Data":"f7853114344ca187d79f1ee93a6dc67c61891a6c2af3bdca5ac431ea4d3709bb"}
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.425225 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7853114344ca187d79f1ee93a6dc67c61891a6c2af3bdca5ac431ea4d3709bb"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.425305 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gffwb"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.534734 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"]
Jan 26 09:47:46 crc kubenswrapper[4827]: E0126 09:47:46.535194 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.535214 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.535404 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.536260 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.551200 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.551395 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.551498 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.551213 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.551917 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.552978 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"]
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.702850 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:47:46 crc kubenswrapper[4827]: E0126 09:47:46.703203 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.716341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.716689 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n55b\" (UniqueName: \"kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.716791 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.716894 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.818711 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.818943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n55b\" (UniqueName: \"kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.818988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.819025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.824150 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.826123 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.828457 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.855903 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n55b\" (UniqueName: \"kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:46 crc kubenswrapper[4827]: I0126 09:47:46.862054 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:47 crc kubenswrapper[4827]: I0126 09:47:47.522520 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"]
Jan 26 09:47:47 crc kubenswrapper[4827]: W0126 09:47:47.530560 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3096f06_9fdd_406d_9200_1fa4a2db5006.slice/crio-fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172 WatchSource:0}: Error finding container fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172: Status 404 returned error can't find the container with id fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172
Jan 26 09:47:48 crc kubenswrapper[4827]: I0126 09:47:48.445126 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k" event={"ID":"c3096f06-9fdd-406d-9200-1fa4a2db5006","Type":"ContainerStarted","Data":"b4d5b958539c224f608b2df9a892c2cd36c310cb0fa6008a13b36385b7e3a030"}
Jan 26 09:47:48 crc kubenswrapper[4827]: I0126 09:47:48.445483 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k" event={"ID":"c3096f06-9fdd-406d-9200-1fa4a2db5006","Type":"ContainerStarted","Data":"fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172"}
Jan 26 09:47:48 crc kubenswrapper[4827]: I0126 09:47:48.462842 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k" podStartSLOduration=1.931348801 podStartE2EDuration="2.462821999s" podCreationTimestamp="2026-01-26 09:47:46 +0000 UTC" firstStartedPulling="2026-01-26 09:47:47.535201158 +0000 UTC m=+2496.183872987" lastFinishedPulling="2026-01-26 09:47:48.066674326 +0000 UTC m=+2496.715346185" observedRunningTime="2026-01-26 09:47:48.462746907 +0000 UTC m=+2497.111418726" watchObservedRunningTime="2026-01-26 09:47:48.462821999 +0000 UTC m=+2497.111493828"
Jan 26 09:47:52 crc kubenswrapper[4827]: I0126 09:47:52.512050 4827 generic.go:334] "Generic (PLEG): container finished" podID="c3096f06-9fdd-406d-9200-1fa4a2db5006" containerID="b4d5b958539c224f608b2df9a892c2cd36c310cb0fa6008a13b36385b7e3a030" exitCode=0
Jan 26 09:47:52 crc kubenswrapper[4827]: I0126 09:47:52.512164 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k" event={"ID":"c3096f06-9fdd-406d-9200-1fa4a2db5006","Type":"ContainerDied","Data":"b4d5b958539c224f608b2df9a892c2cd36c310cb0fa6008a13b36385b7e3a030"}
Jan 26 09:47:53 crc kubenswrapper[4827]: I0126 09:47:53.943307 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.050268 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph\") pod \"c3096f06-9fdd-406d-9200-1fa4a2db5006\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") "
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.050361 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory\") pod \"c3096f06-9fdd-406d-9200-1fa4a2db5006\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") "
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.050578 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n55b\" (UniqueName: \"kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b\") pod \"c3096f06-9fdd-406d-9200-1fa4a2db5006\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") "
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.051272 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam\") pod \"c3096f06-9fdd-406d-9200-1fa4a2db5006\" (UID: \"c3096f06-9fdd-406d-9200-1fa4a2db5006\") "
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.057051 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph" (OuterVolumeSpecName: "ceph") pod "c3096f06-9fdd-406d-9200-1fa4a2db5006" (UID: "c3096f06-9fdd-406d-9200-1fa4a2db5006"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.058931 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b" (OuterVolumeSpecName: "kube-api-access-9n55b") pod "c3096f06-9fdd-406d-9200-1fa4a2db5006" (UID: "c3096f06-9fdd-406d-9200-1fa4a2db5006"). InnerVolumeSpecName "kube-api-access-9n55b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.087815 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3096f06-9fdd-406d-9200-1fa4a2db5006" (UID: "c3096f06-9fdd-406d-9200-1fa4a2db5006"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.095744 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory" (OuterVolumeSpecName: "inventory") pod "c3096f06-9fdd-406d-9200-1fa4a2db5006" (UID: "c3096f06-9fdd-406d-9200-1fa4a2db5006"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.154263 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.154310 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.154325 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n55b\" (UniqueName: \"kubernetes.io/projected/c3096f06-9fdd-406d-9200-1fa4a2db5006-kube-api-access-9n55b\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.154340 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3096f06-9fdd-406d-9200-1fa4a2db5006-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.538035 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k" event={"ID":"c3096f06-9fdd-406d-9200-1fa4a2db5006","Type":"ContainerDied","Data":"fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172"}
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.538357 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9ac632e39cfca91cb2deb429fa1939866f7b060d635c0c12d0929e2dea9172"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.538137 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.645775 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"]
Jan 26 09:47:54 crc kubenswrapper[4827]: E0126 09:47:54.646196 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3096f06-9fdd-406d-9200-1fa4a2db5006" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.646219 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3096f06-9fdd-406d-9200-1fa4a2db5006" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.646428 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3096f06-9fdd-406d-9200-1fa4a2db5006" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.648133 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.650515 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.650717 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.653534 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.653968 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.654067 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.675538 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"]
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.765360 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.765598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpj4\" (UniqueName: \"kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.765754 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.765870 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.867364 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.867897 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.868110 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.868467 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpj4\" (UniqueName: \"kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.871888 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.872793 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.876155 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.886995 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mpj4\" (UniqueName: \"kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x26cv\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:54 crc kubenswrapper[4827]: I0126 09:47:54.966132 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:47:55 crc kubenswrapper[4827]: I0126 09:47:55.565458 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"]
Jan 26 09:47:55 crc kubenswrapper[4827]: W0126 09:47:55.572487 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44eb77f_7f7f_461f_b0aa_cbd347852699.slice/crio-203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53 WatchSource:0}: Error finding container 203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53: Status 404 returned error can't find the container with id 203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53
Jan 26 09:47:56 crc kubenswrapper[4827]: I0126 09:47:56.554684 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" event={"ID":"c44eb77f-7f7f-461f-b0aa-cbd347852699","Type":"ContainerStarted","Data":"c20f7e55185038e9a5481fdcea8085913561fb39fb76515bfcfaf134c173e2ea"}
Jan 26 09:47:56 crc kubenswrapper[4827]: I0126 09:47:56.555269 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" event={"ID":"c44eb77f-7f7f-461f-b0aa-cbd347852699","Type":"ContainerStarted","Data":"203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53"}
Jan 26 09:47:56 crc kubenswrapper[4827]: I0126 09:47:56.573167 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" podStartSLOduration=2.174858403 podStartE2EDuration="2.573148645s" podCreationTimestamp="2026-01-26 09:47:54 +0000 UTC" firstStartedPulling="2026-01-26 09:47:55.585979625 +0000 UTC m=+2504.234651444" lastFinishedPulling="2026-01-26 09:47:55.984269867 +0000 UTC m=+2504.632941686" observedRunningTime="2026-01-26 09:47:56.570378709 +0000 UTC m=+2505.219050528" watchObservedRunningTime="2026-01-26 09:47:56.573148645 +0000 UTC m=+2505.221820464"
Jan 26 09:47:59 crc kubenswrapper[4827]: I0126 09:47:59.703278 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:47:59 crc kubenswrapper[4827]: E0126 09:47:59.703955 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:48:12 crc kubenswrapper[4827]: I0126 09:48:12.702777 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:48:12 crc kubenswrapper[4827]: E0126 09:48:12.703427 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:48:25 crc kubenswrapper[4827]: I0126 09:48:25.703185 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:48:25 crc kubenswrapper[4827]: E0126 09:48:25.704078 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:48:38 crc kubenswrapper[4827]: I0126 09:48:38.703845 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e"
Jan 26 09:48:38 crc kubenswrapper[4827]: E0126 09:48:38.704687 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:48:46 crc kubenswrapper[4827]: I0126 09:48:46.039034 4827 generic.go:334] "Generic (PLEG): container finished" podID="c44eb77f-7f7f-461f-b0aa-cbd347852699" containerID="c20f7e55185038e9a5481fdcea8085913561fb39fb76515bfcfaf134c173e2ea" exitCode=0
Jan 26 09:48:46 crc kubenswrapper[4827]: I0126 09:48:46.039152 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" event={"ID":"c44eb77f-7f7f-461f-b0aa-cbd347852699","Type":"ContainerDied","Data":"c20f7e55185038e9a5481fdcea8085913561fb39fb76515bfcfaf134c173e2ea"}
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.521525 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv"
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.582258 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph\") pod \"c44eb77f-7f7f-461f-b0aa-cbd347852699\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") "
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.582333 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory\") pod \"c44eb77f-7f7f-461f-b0aa-cbd347852699\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") "
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.582396 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam\") pod \"c44eb77f-7f7f-461f-b0aa-cbd347852699\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") "
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.582459 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mpj4\" (UniqueName: \"kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4\") pod \"c44eb77f-7f7f-461f-b0aa-cbd347852699\" (UID: \"c44eb77f-7f7f-461f-b0aa-cbd347852699\") "
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.592327 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4" (OuterVolumeSpecName: "kube-api-access-6mpj4") pod "c44eb77f-7f7f-461f-b0aa-cbd347852699" (UID: "c44eb77f-7f7f-461f-b0aa-cbd347852699"). InnerVolumeSpecName "kube-api-access-6mpj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.592746 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph" (OuterVolumeSpecName: "ceph") pod "c44eb77f-7f7f-461f-b0aa-cbd347852699" (UID: "c44eb77f-7f7f-461f-b0aa-cbd347852699"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.607622 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c44eb77f-7f7f-461f-b0aa-cbd347852699" (UID: "c44eb77f-7f7f-461f-b0aa-cbd347852699"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.610296 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory" (OuterVolumeSpecName: "inventory") pod "c44eb77f-7f7f-461f-b0aa-cbd347852699" (UID: "c44eb77f-7f7f-461f-b0aa-cbd347852699"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.684116 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mpj4\" (UniqueName: \"kubernetes.io/projected/c44eb77f-7f7f-461f-b0aa-cbd347852699-kube-api-access-6mpj4\") on node \"crc\" DevicePath \"\"" Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.684148 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.684158 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:48:47 crc kubenswrapper[4827]: I0126 09:48:47.684166 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c44eb77f-7f7f-461f-b0aa-cbd347852699-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.062714 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" event={"ID":"c44eb77f-7f7f-461f-b0aa-cbd347852699","Type":"ContainerDied","Data":"203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53"} Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.063051 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203e0bff5f5e476e7a37802c89648d4c05ace24b0e12b55bbd05603fe9e64e53" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.062898 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x26cv" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.179564 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pjkdr"] Jan 26 09:48:48 crc kubenswrapper[4827]: E0126 09:48:48.180311 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44eb77f-7f7f-461f-b0aa-cbd347852699" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.180359 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44eb77f-7f7f-461f-b0aa-cbd347852699" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.180713 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44eb77f-7f7f-461f-b0aa-cbd347852699" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.181629 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.184886 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.185366 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.187124 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.190716 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.191569 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.195587 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.195792 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.196329 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" 
(UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.196545 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49zd\" (UniqueName: \"kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.198847 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pjkdr"] Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.297698 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49zd\" (UniqueName: \"kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.297763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.297811 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: 
\"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.297929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.302169 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.308305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.308412 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.315319 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49zd\" (UniqueName: \"kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd\") pod \"ssh-known-hosts-edpm-deployment-pjkdr\" (UID: 
\"9ded20fe-a752-4b6f-94a3-b07079038103\") " pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:48 crc kubenswrapper[4827]: I0126 09:48:48.499708 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:48:49 crc kubenswrapper[4827]: I0126 09:48:49.053670 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pjkdr"] Jan 26 09:48:49 crc kubenswrapper[4827]: W0126 09:48:49.057074 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ded20fe_a752_4b6f_94a3_b07079038103.slice/crio-643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3 WatchSource:0}: Error finding container 643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3: Status 404 returned error can't find the container with id 643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3 Jan 26 09:48:49 crc kubenswrapper[4827]: I0126 09:48:49.071919 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" event={"ID":"9ded20fe-a752-4b6f-94a3-b07079038103","Type":"ContainerStarted","Data":"643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3"} Jan 26 09:48:50 crc kubenswrapper[4827]: I0126 09:48:50.085125 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" event={"ID":"9ded20fe-a752-4b6f-94a3-b07079038103","Type":"ContainerStarted","Data":"6ae09c40562ebb736c62b7c421f4fe4acd73e5225aa07bce0d04050e2c2a8c02"} Jan 26 09:48:50 crc kubenswrapper[4827]: I0126 09:48:50.104104 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" podStartSLOduration=1.570040719 podStartE2EDuration="2.104082877s" podCreationTimestamp="2026-01-26 09:48:48 +0000 UTC" firstStartedPulling="2026-01-26 
09:48:49.059837967 +0000 UTC m=+2557.708509786" lastFinishedPulling="2026-01-26 09:48:49.593880125 +0000 UTC m=+2558.242551944" observedRunningTime="2026-01-26 09:48:50.099021947 +0000 UTC m=+2558.747693786" watchObservedRunningTime="2026-01-26 09:48:50.104082877 +0000 UTC m=+2558.752754696" Jan 26 09:48:53 crc kubenswrapper[4827]: I0126 09:48:53.703302 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:48:54 crc kubenswrapper[4827]: I0126 09:48:54.138228 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb"} Jan 26 09:49:00 crc kubenswrapper[4827]: I0126 09:49:00.194187 4827 generic.go:334] "Generic (PLEG): container finished" podID="9ded20fe-a752-4b6f-94a3-b07079038103" containerID="6ae09c40562ebb736c62b7c421f4fe4acd73e5225aa07bce0d04050e2c2a8c02" exitCode=0 Jan 26 09:49:00 crc kubenswrapper[4827]: I0126 09:49:00.194296 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" event={"ID":"9ded20fe-a752-4b6f-94a3-b07079038103","Type":"ContainerDied","Data":"6ae09c40562ebb736c62b7c421f4fe4acd73e5225aa07bce0d04050e2c2a8c02"} Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.593603 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.758662 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49zd\" (UniqueName: \"kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd\") pod \"9ded20fe-a752-4b6f-94a3-b07079038103\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.758821 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph\") pod \"9ded20fe-a752-4b6f-94a3-b07079038103\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.759708 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam\") pod \"9ded20fe-a752-4b6f-94a3-b07079038103\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.759767 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0\") pod \"9ded20fe-a752-4b6f-94a3-b07079038103\" (UID: \"9ded20fe-a752-4b6f-94a3-b07079038103\") " Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.765973 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd" (OuterVolumeSpecName: "kube-api-access-b49zd") pod "9ded20fe-a752-4b6f-94a3-b07079038103" (UID: "9ded20fe-a752-4b6f-94a3-b07079038103"). InnerVolumeSpecName "kube-api-access-b49zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.766942 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph" (OuterVolumeSpecName: "ceph") pod "9ded20fe-a752-4b6f-94a3-b07079038103" (UID: "9ded20fe-a752-4b6f-94a3-b07079038103"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.785839 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9ded20fe-a752-4b6f-94a3-b07079038103" (UID: "9ded20fe-a752-4b6f-94a3-b07079038103"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.804011 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ded20fe-a752-4b6f-94a3-b07079038103" (UID: "9ded20fe-a752-4b6f-94a3-b07079038103"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.862601 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.862632 4827 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.862658 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49zd\" (UniqueName: \"kubernetes.io/projected/9ded20fe-a752-4b6f-94a3-b07079038103-kube-api-access-b49zd\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:01 crc kubenswrapper[4827]: I0126 09:49:01.862667 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ded20fe-a752-4b6f-94a3-b07079038103-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.217923 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" event={"ID":"9ded20fe-a752-4b6f-94a3-b07079038103","Type":"ContainerDied","Data":"643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3"} Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.218568 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643bcfc528b5204c303bbce703a94805555e8088e6ffdf717c3044a860c135b3" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.218088 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pjkdr" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.313251 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm"] Jan 26 09:49:02 crc kubenswrapper[4827]: E0126 09:49:02.313789 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ded20fe-a752-4b6f-94a3-b07079038103" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.313852 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ded20fe-a752-4b6f-94a3-b07079038103" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.314084 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ded20fe-a752-4b6f-94a3-b07079038103" containerName="ssh-known-hosts-edpm-deployment" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.314624 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.318934 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.319208 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.319392 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.319530 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.320230 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.338155 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm"] Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.372284 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.372374 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrwr\" (UniqueName: \"kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: 
\"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.372441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.372481 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.473994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrwr\" (UniqueName: \"kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.474065 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.474099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.474138 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.479375 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.480245 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.490185 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 
09:49:02.491539 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrwr\" (UniqueName: \"kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rkbpm\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:02 crc kubenswrapper[4827]: I0126 09:49:02.632256 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:03 crc kubenswrapper[4827]: I0126 09:49:03.027412 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm"] Jan 26 09:49:03 crc kubenswrapper[4827]: I0126 09:49:03.226413 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" event={"ID":"cf1f3c72-ea59-4949-aec9-51d06e078251","Type":"ContainerStarted","Data":"68858290cc0c4c6839a7674832a3af692375af7e07ea6270e903b4bd8ce43efc"} Jan 26 09:49:04 crc kubenswrapper[4827]: I0126 09:49:04.241093 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" event={"ID":"cf1f3c72-ea59-4949-aec9-51d06e078251","Type":"ContainerStarted","Data":"5584f887c81d7b2bb6da499dbb30b3e3b82f6feb5888a42489d1fa425009591c"} Jan 26 09:49:04 crc kubenswrapper[4827]: I0126 09:49:04.261191 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" podStartSLOduration=1.862535082 podStartE2EDuration="2.261166283s" podCreationTimestamp="2026-01-26 09:49:02 +0000 UTC" firstStartedPulling="2026-01-26 09:49:03.042683168 +0000 UTC m=+2571.691354987" lastFinishedPulling="2026-01-26 09:49:03.441314329 +0000 UTC m=+2572.089986188" observedRunningTime="2026-01-26 09:49:04.26029867 +0000 UTC 
m=+2572.908970499" watchObservedRunningTime="2026-01-26 09:49:04.261166283 +0000 UTC m=+2572.909838112" Jan 26 09:49:11 crc kubenswrapper[4827]: I0126 09:49:11.325473 4827 generic.go:334] "Generic (PLEG): container finished" podID="cf1f3c72-ea59-4949-aec9-51d06e078251" containerID="5584f887c81d7b2bb6da499dbb30b3e3b82f6feb5888a42489d1fa425009591c" exitCode=0 Jan 26 09:49:11 crc kubenswrapper[4827]: I0126 09:49:11.325628 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" event={"ID":"cf1f3c72-ea59-4949-aec9-51d06e078251","Type":"ContainerDied","Data":"5584f887c81d7b2bb6da499dbb30b3e3b82f6feb5888a42489d1fa425009591c"} Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.744131 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.818117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrwr\" (UniqueName: \"kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr\") pod \"cf1f3c72-ea59-4949-aec9-51d06e078251\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.818307 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam\") pod \"cf1f3c72-ea59-4949-aec9-51d06e078251\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.818329 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory\") pod \"cf1f3c72-ea59-4949-aec9-51d06e078251\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " Jan 26 
09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.818356 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph\") pod \"cf1f3c72-ea59-4949-aec9-51d06e078251\" (UID: \"cf1f3c72-ea59-4949-aec9-51d06e078251\") " Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.823602 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr" (OuterVolumeSpecName: "kube-api-access-dtrwr") pod "cf1f3c72-ea59-4949-aec9-51d06e078251" (UID: "cf1f3c72-ea59-4949-aec9-51d06e078251"). InnerVolumeSpecName "kube-api-access-dtrwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.824137 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph" (OuterVolumeSpecName: "ceph") pod "cf1f3c72-ea59-4949-aec9-51d06e078251" (UID: "cf1f3c72-ea59-4949-aec9-51d06e078251"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.842547 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf1f3c72-ea59-4949-aec9-51d06e078251" (UID: "cf1f3c72-ea59-4949-aec9-51d06e078251"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.852136 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory" (OuterVolumeSpecName: "inventory") pod "cf1f3c72-ea59-4949-aec9-51d06e078251" (UID: "cf1f3c72-ea59-4949-aec9-51d06e078251"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.919868 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.919899 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtrwr\" (UniqueName: \"kubernetes.io/projected/cf1f3c72-ea59-4949-aec9-51d06e078251-kube-api-access-dtrwr\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.919910 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:12 crc kubenswrapper[4827]: I0126 09:49:12.919920 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf1f3c72-ea59-4949-aec9-51d06e078251-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.354787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" event={"ID":"cf1f3c72-ea59-4949-aec9-51d06e078251","Type":"ContainerDied","Data":"68858290cc0c4c6839a7674832a3af692375af7e07ea6270e903b4bd8ce43efc"} Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.354921 4827 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="68858290cc0c4c6839a7674832a3af692375af7e07ea6270e903b4bd8ce43efc" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.354854 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rkbpm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.430194 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm"] Jan 26 09:49:13 crc kubenswrapper[4827]: E0126 09:49:13.430549 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1f3c72-ea59-4949-aec9-51d06e078251" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.430570 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1f3c72-ea59-4949-aec9-51d06e078251" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.430744 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1f3c72-ea59-4949-aec9-51d06e078251" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.431853 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.434439 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.434489 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.434788 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.439037 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.443252 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.463625 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm"] Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.529000 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.529532 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.529739 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.529863 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkz69\" (UniqueName: \"kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.632161 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.632258 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.632408 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.632457 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkz69\" (UniqueName: \"kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.636233 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.636405 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.640387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.654942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkz69\" (UniqueName: \"kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:13 crc kubenswrapper[4827]: I0126 09:49:13.750415 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:14 crc kubenswrapper[4827]: I0126 09:49:14.330411 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm"] Jan 26 09:49:14 crc kubenswrapper[4827]: I0126 09:49:14.364189 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" event={"ID":"bb198638-c527-412e-96ae-d0cdc3c4abbd","Type":"ContainerStarted","Data":"ff4831270b9909164d79eb349efa66c2514887b6acd3b6a3bb12ff0f8d140d7d"} Jan 26 09:49:15 crc kubenswrapper[4827]: I0126 09:49:15.373943 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" event={"ID":"bb198638-c527-412e-96ae-d0cdc3c4abbd","Type":"ContainerStarted","Data":"d2e3f1cae08191f795916e88152b6a4d018ef338e388cc452b6da6d234df895f"} Jan 26 09:49:15 crc kubenswrapper[4827]: I0126 09:49:15.407805 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" podStartSLOduration=1.844463791 podStartE2EDuration="2.407774765s" podCreationTimestamp="2026-01-26 09:49:13 +0000 UTC" firstStartedPulling="2026-01-26 09:49:14.348198393 +0000 UTC m=+2582.996870212" 
lastFinishedPulling="2026-01-26 09:49:14.911509367 +0000 UTC m=+2583.560181186" observedRunningTime="2026-01-26 09:49:15.391557689 +0000 UTC m=+2584.040229508" watchObservedRunningTime="2026-01-26 09:49:15.407774765 +0000 UTC m=+2584.056446614" Jan 26 09:49:25 crc kubenswrapper[4827]: I0126 09:49:25.495908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" event={"ID":"bb198638-c527-412e-96ae-d0cdc3c4abbd","Type":"ContainerDied","Data":"d2e3f1cae08191f795916e88152b6a4d018ef338e388cc452b6da6d234df895f"} Jan 26 09:49:25 crc kubenswrapper[4827]: I0126 09:49:25.496036 4827 generic.go:334] "Generic (PLEG): container finished" podID="bb198638-c527-412e-96ae-d0cdc3c4abbd" containerID="d2e3f1cae08191f795916e88152b6a4d018ef338e388cc452b6da6d234df895f" exitCode=0 Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.032691 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.192018 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam\") pod \"bb198638-c527-412e-96ae-d0cdc3c4abbd\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.192116 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkz69\" (UniqueName: \"kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69\") pod \"bb198638-c527-412e-96ae-d0cdc3c4abbd\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.192218 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph\") pod \"bb198638-c527-412e-96ae-d0cdc3c4abbd\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.192305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory\") pod \"bb198638-c527-412e-96ae-d0cdc3c4abbd\" (UID: \"bb198638-c527-412e-96ae-d0cdc3c4abbd\") " Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.199273 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph" (OuterVolumeSpecName: "ceph") pod "bb198638-c527-412e-96ae-d0cdc3c4abbd" (UID: "bb198638-c527-412e-96ae-d0cdc3c4abbd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.211906 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69" (OuterVolumeSpecName: "kube-api-access-fkz69") pod "bb198638-c527-412e-96ae-d0cdc3c4abbd" (UID: "bb198638-c527-412e-96ae-d0cdc3c4abbd"). InnerVolumeSpecName "kube-api-access-fkz69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.226169 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb198638-c527-412e-96ae-d0cdc3c4abbd" (UID: "bb198638-c527-412e-96ae-d0cdc3c4abbd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.226820 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory" (OuterVolumeSpecName: "inventory") pod "bb198638-c527-412e-96ae-d0cdc3c4abbd" (UID: "bb198638-c527-412e-96ae-d0cdc3c4abbd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.294895 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.294929 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.294941 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb198638-c527-412e-96ae-d0cdc3c4abbd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.294950 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkz69\" (UniqueName: \"kubernetes.io/projected/bb198638-c527-412e-96ae-d0cdc3c4abbd-kube-api-access-fkz69\") on node \"crc\" DevicePath \"\"" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.520956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" event={"ID":"bb198638-c527-412e-96ae-d0cdc3c4abbd","Type":"ContainerDied","Data":"ff4831270b9909164d79eb349efa66c2514887b6acd3b6a3bb12ff0f8d140d7d"} Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.521017 4827 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="ff4831270b9909164d79eb349efa66c2514887b6acd3b6a3bb12ff0f8d140d7d" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.521083 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.639052 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7"] Jan 26 09:49:27 crc kubenswrapper[4827]: E0126 09:49:27.639484 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb198638-c527-412e-96ae-d0cdc3c4abbd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.639507 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb198638-c527-412e-96ae-d0cdc3c4abbd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.640134 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb198638-c527-412e-96ae-d0cdc3c4abbd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.641009 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.644130 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.644468 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.644806 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.644828 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.644998 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.645154 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.645162 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.646114 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.662319 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7"] Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.804796 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.804860 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806202 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806585 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806735 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806820 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806877 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.806904 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh7v\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.807327 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.808154 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.808234 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.808263 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910185 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910237 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910328 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910373 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910530 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910569 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: 
\"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910601 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910668 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910712 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.910743 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjh7v\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 
09:49:27.910785 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.915414 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.918896 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.919356 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.921451 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.921876 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.926417 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.926432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.928408 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: 
\"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.929718 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.929991 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.930886 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.931105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc 
kubenswrapper[4827]: I0126 09:49:27.951329 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjh7v\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:27 crc kubenswrapper[4827]: I0126 09:49:27.960260 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:49:28 crc kubenswrapper[4827]: I0126 09:49:28.532232 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7"] Jan 26 09:49:29 crc kubenswrapper[4827]: I0126 09:49:29.539762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" event={"ID":"c4931278-b623-46c2-8444-9a7b75093703","Type":"ContainerStarted","Data":"5d64200794e9f9d389d08ad04b5b4ef0f87092474a451368c6d95d12fe202f4d"} Jan 26 09:49:29 crc kubenswrapper[4827]: I0126 09:49:29.540706 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" event={"ID":"c4931278-b623-46c2-8444-9a7b75093703","Type":"ContainerStarted","Data":"ce6fe0556668e9880b163492463f59c4f3fb1e68fec14156edfe88d1bd4664e3"} Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.253938 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" podStartSLOduration=28.761694143 podStartE2EDuration="29.253915481s" podCreationTimestamp="2026-01-26 09:49:27 +0000 UTC" firstStartedPulling="2026-01-26 09:49:28.546976268 +0000 UTC m=+2597.195648097" lastFinishedPulling="2026-01-26 09:49:29.039197606 +0000 UTC 
m=+2597.687869435" observedRunningTime="2026-01-26 09:49:29.567006432 +0000 UTC m=+2598.215678251" watchObservedRunningTime="2026-01-26 09:49:56.253915481 +0000 UTC m=+2624.902587310" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.262472 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.270503 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.282173 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.347079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.347350 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb8tl\" (UniqueName: \"kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.347485 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 
09:49:56.448751 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.448807 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb8tl\" (UniqueName: \"kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.448865 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.449301 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.449564 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.472581 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jb8tl\" (UniqueName: \"kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl\") pod \"redhat-operators-mnpzh\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:56 crc kubenswrapper[4827]: I0126 09:49:56.599516 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:49:57 crc kubenswrapper[4827]: I0126 09:49:57.089602 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:49:57 crc kubenswrapper[4827]: I0126 09:49:57.887836 4827 generic.go:334] "Generic (PLEG): container finished" podID="d4362eae-b6c8-415e-bc91-db54839624de" containerID="7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57" exitCode=0 Jan 26 09:49:57 crc kubenswrapper[4827]: I0126 09:49:57.888172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerDied","Data":"7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57"} Jan 26 09:49:57 crc kubenswrapper[4827]: I0126 09:49:57.888208 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerStarted","Data":"ef51658d7764eab4dd43880a2dabfebcba79e6320fdfc359f9a0a2ec53ebc4bc"} Jan 26 09:49:58 crc kubenswrapper[4827]: I0126 09:49:58.898273 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerStarted","Data":"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3"} Jan 26 09:50:01 crc kubenswrapper[4827]: I0126 09:50:01.928083 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="d4362eae-b6c8-415e-bc91-db54839624de" containerID="2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3" exitCode=0 Jan 26 09:50:01 crc kubenswrapper[4827]: I0126 09:50:01.928191 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerDied","Data":"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3"} Jan 26 09:50:02 crc kubenswrapper[4827]: I0126 09:50:02.938126 4827 generic.go:334] "Generic (PLEG): container finished" podID="c4931278-b623-46c2-8444-9a7b75093703" containerID="5d64200794e9f9d389d08ad04b5b4ef0f87092474a451368c6d95d12fe202f4d" exitCode=0 Jan 26 09:50:02 crc kubenswrapper[4827]: I0126 09:50:02.938485 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" event={"ID":"c4931278-b623-46c2-8444-9a7b75093703","Type":"ContainerDied","Data":"5d64200794e9f9d389d08ad04b5b4ef0f87092474a451368c6d95d12fe202f4d"} Jan 26 09:50:02 crc kubenswrapper[4827]: I0126 09:50:02.942177 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerStarted","Data":"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20"} Jan 26 09:50:02 crc kubenswrapper[4827]: I0126 09:50:02.980682 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mnpzh" podStartSLOduration=2.429857784 podStartE2EDuration="6.980661767s" podCreationTimestamp="2026-01-26 09:49:56 +0000 UTC" firstStartedPulling="2026-01-26 09:49:57.896966138 +0000 UTC m=+2626.545637957" lastFinishedPulling="2026-01-26 09:50:02.447770121 +0000 UTC m=+2631.096441940" observedRunningTime="2026-01-26 09:50:02.973959432 +0000 UTC m=+2631.622631281" watchObservedRunningTime="2026-01-26 09:50:02.980661767 +0000 UTC 
m=+2631.629333586" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.421653 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496391 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjh7v\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496742 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496770 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496798 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496824 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle\") pod 
\"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496860 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496906 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496922 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496945 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.496980 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " 
Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.497010 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.497032 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.497051 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle\") pod \"c4931278-b623-46c2-8444-9a7b75093703\" (UID: \"c4931278-b623-46c2-8444-9a7b75093703\") " Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.504996 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph" (OuterVolumeSpecName: "ceph") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.505065 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v" (OuterVolumeSpecName: "kube-api-access-zjh7v") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "kube-api-access-zjh7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.507568 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.507932 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.508074 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.513744 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.513790 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.513794 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.513876 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.526399 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.528771 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.530839 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory" (OuterVolumeSpecName: "inventory") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.537139 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4931278-b623-46c2-8444-9a7b75093703" (UID: "c4931278-b623-46c2-8444-9a7b75093703"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599491 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599533 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599552 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599567 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjh7v\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-kube-api-access-zjh7v\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599578 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599589 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599602 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599627 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599654 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599667 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599679 4827 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599691 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4931278-b623-46c2-8444-9a7b75093703-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.599702 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4931278-b623-46c2-8444-9a7b75093703-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.962991 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" event={"ID":"c4931278-b623-46c2-8444-9a7b75093703","Type":"ContainerDied","Data":"ce6fe0556668e9880b163492463f59c4f3fb1e68fec14156edfe88d1bd4664e3"} Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.963062 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6fe0556668e9880b163492463f59c4f3fb1e68fec14156edfe88d1bd4664e3" Jan 26 09:50:04 crc kubenswrapper[4827]: I0126 09:50:04.963167 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.168749 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff"] Jan 26 09:50:05 crc kubenswrapper[4827]: E0126 09:50:05.169283 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4931278-b623-46c2-8444-9a7b75093703" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.169312 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4931278-b623-46c2-8444-9a7b75093703" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.189013 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4931278-b623-46c2-8444-9a7b75093703" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.190418 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.197354 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.197457 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.197791 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.198103 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.198238 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.199587 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff"] Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.209604 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.209718 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: 
\"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.209769 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.209795 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddv5\" (UniqueName: \"kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.311087 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.311157 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.311216 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.311242 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cddv5\" (UniqueName: \"kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.321625 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.331435 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.331937 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: 
\"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.338580 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cddv5\" (UniqueName: \"kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:05 crc kubenswrapper[4827]: I0126 09:50:05.531750 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:06 crc kubenswrapper[4827]: I0126 09:50:06.053804 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff"] Jan 26 09:50:06 crc kubenswrapper[4827]: I0126 09:50:06.599666 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:06 crc kubenswrapper[4827]: I0126 09:50:06.599893 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:06 crc kubenswrapper[4827]: I0126 09:50:06.985337 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" event={"ID":"9303c2b8-3943-40d2-b648-a7d24cf50214","Type":"ContainerStarted","Data":"cd6c7e8f96e5272155f503f3751936619cf351db1958386d062a4f24d2d55595"} Jan 26 09:50:06 crc kubenswrapper[4827]: I0126 09:50:06.985388 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" 
event={"ID":"9303c2b8-3943-40d2-b648-a7d24cf50214","Type":"ContainerStarted","Data":"e3a3876d89dfcd113ce3450aa46f6ddfceeb54d0cb9e1eaecd872c2f6adfa854"} Jan 26 09:50:07 crc kubenswrapper[4827]: I0126 09:50:07.009621 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" podStartSLOduration=1.56478933 podStartE2EDuration="2.009601003s" podCreationTimestamp="2026-01-26 09:50:05 +0000 UTC" firstStartedPulling="2026-01-26 09:50:06.068057219 +0000 UTC m=+2634.716729038" lastFinishedPulling="2026-01-26 09:50:06.512868892 +0000 UTC m=+2635.161540711" observedRunningTime="2026-01-26 09:50:07.00260408 +0000 UTC m=+2635.651275899" watchObservedRunningTime="2026-01-26 09:50:07.009601003 +0000 UTC m=+2635.658272822" Jan 26 09:50:07 crc kubenswrapper[4827]: I0126 09:50:07.664737 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mnpzh" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="registry-server" probeResult="failure" output=< Jan 26 09:50:07 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 09:50:07 crc kubenswrapper[4827]: > Jan 26 09:50:13 crc kubenswrapper[4827]: I0126 09:50:13.050207 4827 generic.go:334] "Generic (PLEG): container finished" podID="9303c2b8-3943-40d2-b648-a7d24cf50214" containerID="cd6c7e8f96e5272155f503f3751936619cf351db1958386d062a4f24d2d55595" exitCode=0 Jan 26 09:50:13 crc kubenswrapper[4827]: I0126 09:50:13.050426 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" event={"ID":"9303c2b8-3943-40d2-b648-a7d24cf50214","Type":"ContainerDied","Data":"cd6c7e8f96e5272155f503f3751936619cf351db1958386d062a4f24d2d55595"} Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.571746 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.717718 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph\") pod \"9303c2b8-3943-40d2-b648-a7d24cf50214\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.717820 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cddv5\" (UniqueName: \"kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5\") pod \"9303c2b8-3943-40d2-b648-a7d24cf50214\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.717858 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam\") pod \"9303c2b8-3943-40d2-b648-a7d24cf50214\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.717881 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory\") pod \"9303c2b8-3943-40d2-b648-a7d24cf50214\" (UID: \"9303c2b8-3943-40d2-b648-a7d24cf50214\") " Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.723122 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph" (OuterVolumeSpecName: "ceph") pod "9303c2b8-3943-40d2-b648-a7d24cf50214" (UID: "9303c2b8-3943-40d2-b648-a7d24cf50214"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.723226 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5" (OuterVolumeSpecName: "kube-api-access-cddv5") pod "9303c2b8-3943-40d2-b648-a7d24cf50214" (UID: "9303c2b8-3943-40d2-b648-a7d24cf50214"). InnerVolumeSpecName "kube-api-access-cddv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.748672 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9303c2b8-3943-40d2-b648-a7d24cf50214" (UID: "9303c2b8-3943-40d2-b648-a7d24cf50214"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.761221 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory" (OuterVolumeSpecName: "inventory") pod "9303c2b8-3943-40d2-b648-a7d24cf50214" (UID: "9303c2b8-3943-40d2-b648-a7d24cf50214"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.821984 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cddv5\" (UniqueName: \"kubernetes.io/projected/9303c2b8-3943-40d2-b648-a7d24cf50214-kube-api-access-cddv5\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.822021 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.822034 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:14 crc kubenswrapper[4827]: I0126 09:50:14.822047 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9303c2b8-3943-40d2-b648-a7d24cf50214-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.070173 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" event={"ID":"9303c2b8-3943-40d2-b648-a7d24cf50214","Type":"ContainerDied","Data":"e3a3876d89dfcd113ce3450aa46f6ddfceeb54d0cb9e1eaecd872c2f6adfa854"} Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.070220 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a3876d89dfcd113ce3450aa46f6ddfceeb54d0cb9e1eaecd872c2f6adfa854" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.070281 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.166913 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh"] Jan 26 09:50:15 crc kubenswrapper[4827]: E0126 09:50:15.167240 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9303c2b8-3943-40d2-b648-a7d24cf50214" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.167256 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9303c2b8-3943-40d2-b648-a7d24cf50214" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.167412 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9303c2b8-3943-40d2-b648-a7d24cf50214" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.167954 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.171125 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.173020 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.173246 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.173253 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.173599 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.175767 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.184734 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh"] Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.229456 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.229541 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.229583 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrvd\" (UniqueName: \"kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.229840 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.229956 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.230155 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332142 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332194 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332224 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrvd\" (UniqueName: \"kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332301 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332355 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.332390 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.333450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.335609 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.335702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.346008 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.348948 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.350314 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrvd\" (UniqueName: \"kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xstrh\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:15 crc kubenswrapper[4827]: I0126 09:50:15.486561 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:50:16 crc kubenswrapper[4827]: I0126 09:50:15.999924 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh"] Jan 26 09:50:16 crc kubenswrapper[4827]: I0126 09:50:16.087300 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" event={"ID":"46064584-0d9c-4054-87ce-e417f22cd6ad","Type":"ContainerStarted","Data":"39dfb400d4f3cdad88500bc46e63699cdafa21a338962f07871b442a2faafc65"} Jan 26 09:50:16 crc kubenswrapper[4827]: I0126 09:50:16.696293 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:16 crc kubenswrapper[4827]: I0126 09:50:16.765030 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:16 crc kubenswrapper[4827]: I0126 09:50:16.938057 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:50:17 crc kubenswrapper[4827]: I0126 09:50:17.096728 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" event={"ID":"46064584-0d9c-4054-87ce-e417f22cd6ad","Type":"ContainerStarted","Data":"90e314fe3fff6709a9a86c4df6c628eafd0d23cf2803ec6f82ed0db0d139e014"} Jan 26 09:50:17 crc kubenswrapper[4827]: I0126 09:50:17.141061 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" podStartSLOduration=1.602450052 podStartE2EDuration="2.141042545s" podCreationTimestamp="2026-01-26 09:50:15 +0000 UTC" firstStartedPulling="2026-01-26 09:50:16.007644861 +0000 UTC m=+2644.656316680" lastFinishedPulling="2026-01-26 09:50:16.546237344 +0000 UTC m=+2645.194909173" observedRunningTime="2026-01-26 
09:50:17.113904498 +0000 UTC m=+2645.762576327" watchObservedRunningTime="2026-01-26 09:50:17.141042545 +0000 UTC m=+2645.789714364" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.107176 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mnpzh" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="registry-server" containerID="cri-o://3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20" gracePeriod=2 Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.567502 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.701444 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content\") pod \"d4362eae-b6c8-415e-bc91-db54839624de\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.701499 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb8tl\" (UniqueName: \"kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl\") pod \"d4362eae-b6c8-415e-bc91-db54839624de\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.701593 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities\") pod \"d4362eae-b6c8-415e-bc91-db54839624de\" (UID: \"d4362eae-b6c8-415e-bc91-db54839624de\") " Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.702400 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities" (OuterVolumeSpecName: 
"utilities") pod "d4362eae-b6c8-415e-bc91-db54839624de" (UID: "d4362eae-b6c8-415e-bc91-db54839624de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.717705 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl" (OuterVolumeSpecName: "kube-api-access-jb8tl") pod "d4362eae-b6c8-415e-bc91-db54839624de" (UID: "d4362eae-b6c8-415e-bc91-db54839624de"). InnerVolumeSpecName "kube-api-access-jb8tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.804995 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb8tl\" (UniqueName: \"kubernetes.io/projected/d4362eae-b6c8-415e-bc91-db54839624de-kube-api-access-jb8tl\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.805043 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.842522 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4362eae-b6c8-415e-bc91-db54839624de" (UID: "d4362eae-b6c8-415e-bc91-db54839624de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:50:18 crc kubenswrapper[4827]: I0126 09:50:18.906820 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4362eae-b6c8-415e-bc91-db54839624de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.116969 4827 generic.go:334] "Generic (PLEG): container finished" podID="d4362eae-b6c8-415e-bc91-db54839624de" containerID="3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20" exitCode=0 Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.117011 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerDied","Data":"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20"} Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.117036 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mnpzh" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.117053 4827 scope.go:117] "RemoveContainer" containerID="3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.117041 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnpzh" event={"ID":"d4362eae-b6c8-415e-bc91-db54839624de","Type":"ContainerDied","Data":"ef51658d7764eab4dd43880a2dabfebcba79e6320fdfc359f9a0a2ec53ebc4bc"} Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.137108 4827 scope.go:117] "RemoveContainer" containerID="2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.154617 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.165975 4827 scope.go:117] "RemoveContainer" containerID="7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.173957 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mnpzh"] Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.213816 4827 scope.go:117] "RemoveContainer" containerID="3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20" Jan 26 09:50:19 crc kubenswrapper[4827]: E0126 09:50:19.215037 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20\": container with ID starting with 3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20 not found: ID does not exist" containerID="3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.215081 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20"} err="failed to get container status \"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20\": rpc error: code = NotFound desc = could not find container \"3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20\": container with ID starting with 3e158037224041addcec4fdb2810822fa9ed62192bda4d4c52b0d64554183a20 not found: ID does not exist" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.215106 4827 scope.go:117] "RemoveContainer" containerID="2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3" Jan 26 09:50:19 crc kubenswrapper[4827]: E0126 09:50:19.218901 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3\": container with ID starting with 2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3 not found: ID does not exist" containerID="2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.218938 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3"} err="failed to get container status \"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3\": rpc error: code = NotFound desc = could not find container \"2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3\": container with ID starting with 2f45a36ce413ae6c7cced67817b7200f481b30e48e3f1b4763e2fbbfb594e1b3 not found: ID does not exist" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.218965 4827 scope.go:117] "RemoveContainer" containerID="7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57" Jan 26 09:50:19 crc kubenswrapper[4827]: E0126 
09:50:19.219172 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57\": container with ID starting with 7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57 not found: ID does not exist" containerID="7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.219197 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57"} err="failed to get container status \"7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57\": rpc error: code = NotFound desc = could not find container \"7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57\": container with ID starting with 7e56cb94df6d150b1251238559c1eac9cd27550ab1342737a4b1429c694edb57 not found: ID does not exist" Jan 26 09:50:19 crc kubenswrapper[4827]: I0126 09:50:19.715462 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4362eae-b6c8-415e-bc91-db54839624de" path="/var/lib/kubelet/pods/d4362eae-b6c8-415e-bc91-db54839624de/volumes" Jan 26 09:51:12 crc kubenswrapper[4827]: I0126 09:51:12.268931 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:51:12 crc kubenswrapper[4827]: I0126 09:51:12.269542 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 09:51:42 crc kubenswrapper[4827]: I0126 09:51:42.271152 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:51:42 crc kubenswrapper[4827]: I0126 09:51:42.271574 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:51:44 crc kubenswrapper[4827]: I0126 09:51:44.888174 4827 generic.go:334] "Generic (PLEG): container finished" podID="46064584-0d9c-4054-87ce-e417f22cd6ad" containerID="90e314fe3fff6709a9a86c4df6c628eafd0d23cf2803ec6f82ed0db0d139e014" exitCode=0 Jan 26 09:51:44 crc kubenswrapper[4827]: I0126 09:51:44.888226 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" event={"ID":"46064584-0d9c-4054-87ce-e417f22cd6ad","Type":"ContainerDied","Data":"90e314fe3fff6709a9a86c4df6c628eafd0d23cf2803ec6f82ed0db0d139e014"} Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.296132 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.455027 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.455280 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.455481 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.456309 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.456517 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.456650 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-gjrvd\" (UniqueName: \"kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd\") pod \"46064584-0d9c-4054-87ce-e417f22cd6ad\" (UID: \"46064584-0d9c-4054-87ce-e417f22cd6ad\") " Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.461692 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.462928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph" (OuterVolumeSpecName: "ceph") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.464833 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd" (OuterVolumeSpecName: "kube-api-access-gjrvd") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "kube-api-access-gjrvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.481591 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.495681 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.497889 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory" (OuterVolumeSpecName: "inventory") pod "46064584-0d9c-4054-87ce-e417f22cd6ad" (UID: "46064584-0d9c-4054-87ce-e417f22cd6ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.558784 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.559097 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.559118 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.559131 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjrvd\" (UniqueName: \"kubernetes.io/projected/46064584-0d9c-4054-87ce-e417f22cd6ad-kube-api-access-gjrvd\") on node \"crc\" 
DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.559144 4827 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/46064584-0d9c-4054-87ce-e417f22cd6ad-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.559152 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46064584-0d9c-4054-87ce-e417f22cd6ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.907550 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" event={"ID":"46064584-0d9c-4054-87ce-e417f22cd6ad","Type":"ContainerDied","Data":"39dfb400d4f3cdad88500bc46e63699cdafa21a338962f07871b442a2faafc65"} Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.907621 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39dfb400d4f3cdad88500bc46e63699cdafa21a338962f07871b442a2faafc65" Jan 26 09:51:46 crc kubenswrapper[4827]: I0126 09:51:46.907704 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xstrh" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.066355 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q"] Jan 26 09:51:47 crc kubenswrapper[4827]: E0126 09:51:47.066887 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46064584-0d9c-4054-87ce-e417f22cd6ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.066917 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46064584-0d9c-4054-87ce-e417f22cd6ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 09:51:47 crc kubenswrapper[4827]: E0126 09:51:47.066940 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="extract-content" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.066948 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="extract-content" Jan 26 09:51:47 crc kubenswrapper[4827]: E0126 09:51:47.066962 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="registry-server" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.066970 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="registry-server" Jan 26 09:51:47 crc kubenswrapper[4827]: E0126 09:51:47.066989 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="extract-utilities" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.066996 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="extract-utilities" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.067210 4827 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46064584-0d9c-4054-87ce-e417f22cd6ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.067229 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4362eae-b6c8-415e-bc91-db54839624de" containerName="registry-server" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.067980 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.070328 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.070828 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.071291 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.071343 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.072219 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.072365 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.077453 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.093380 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q"] Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.168997 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdn2n\" (UniqueName: \"kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.169077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.169123 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.169163 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 
09:51:47.169204 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.169231 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.169271 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271279 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271350 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271395 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271417 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271463 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271502 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdn2n\" (UniqueName: 
\"kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.271562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.276938 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.277388 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.277906 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" 
(UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.278761 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.278792 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.284287 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.286867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdn2n\" (UniqueName: \"kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc 
kubenswrapper[4827]: I0126 09:51:47.393272 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:51:47 crc kubenswrapper[4827]: I0126 09:51:47.996754 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:51:48 crc kubenswrapper[4827]: I0126 09:51:48.001409 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q"] Jan 26 09:51:48 crc kubenswrapper[4827]: I0126 09:51:48.929657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" event={"ID":"fd1780e3-c584-4f74-b260-cec896594153","Type":"ContainerStarted","Data":"70f7c22da752284226e6d17dc1fca54d2e512dfa7904a71955677de16f4f2509"} Jan 26 09:51:48 crc kubenswrapper[4827]: I0126 09:51:48.930025 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" event={"ID":"fd1780e3-c584-4f74-b260-cec896594153","Type":"ContainerStarted","Data":"96a65afda489d3ff751adb830678e18642b29713cf4870e087ff1a963dd19077"} Jan 26 09:51:48 crc kubenswrapper[4827]: I0126 09:51:48.959300 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" podStartSLOduration=1.5093367340000001 podStartE2EDuration="1.959285257s" podCreationTimestamp="2026-01-26 09:51:47 +0000 UTC" firstStartedPulling="2026-01-26 09:51:47.996280822 +0000 UTC m=+2736.644952641" lastFinishedPulling="2026-01-26 09:51:48.446229305 +0000 UTC m=+2737.094901164" observedRunningTime="2026-01-26 09:51:48.957436065 +0000 UTC m=+2737.606107884" watchObservedRunningTime="2026-01-26 09:51:48.959285257 +0000 UTC m=+2737.607957066" Jan 26 09:52:12 crc kubenswrapper[4827]: I0126 09:52:12.268873 4827 patch_prober.go:28] 
interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:52:12 crc kubenswrapper[4827]: I0126 09:52:12.269554 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:52:12 crc kubenswrapper[4827]: I0126 09:52:12.269613 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 09:52:12 crc kubenswrapper[4827]: I0126 09:52:12.270606 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:52:12 crc kubenswrapper[4827]: I0126 09:52:12.270721 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb" gracePeriod=600 Jan 26 09:52:13 crc kubenswrapper[4827]: I0126 09:52:13.167869 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb" exitCode=0 Jan 26 09:52:13 crc kubenswrapper[4827]: I0126 
09:52:13.168303 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb"} Jan 26 09:52:13 crc kubenswrapper[4827]: I0126 09:52:13.168327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"} Jan 26 09:52:13 crc kubenswrapper[4827]: I0126 09:52:13.168354 4827 scope.go:117] "RemoveContainer" containerID="80a8a70bd3c8284f7643cf84dfac23e74aeedc1538ed484db2602ba1dcf17c5e" Jan 26 09:52:24 crc kubenswrapper[4827]: I0126 09:52:24.982609 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:24 crc kubenswrapper[4827]: I0126 09:52:24.985770 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.004760 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.102910 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.103042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.103122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctpg\" (UniqueName: \"kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.204594 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctpg\" (UniqueName: \"kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.204756 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.204808 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.205322 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.205362 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.236757 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctpg\" (UniqueName: \"kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg\") pod \"certified-operators-lwbtg\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.310887 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:25 crc kubenswrapper[4827]: I0126 09:52:25.843887 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:25 crc kubenswrapper[4827]: W0126 09:52:25.852680 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb541b313_3f02_466e_8ecd_9a93aef18b01.slice/crio-97396c4d06b3bf6defad6862b7b458df61fe42cf7fa87bb1b858bcc39a9abecf WatchSource:0}: Error finding container 97396c4d06b3bf6defad6862b7b458df61fe42cf7fa87bb1b858bcc39a9abecf: Status 404 returned error can't find the container with id 97396c4d06b3bf6defad6862b7b458df61fe42cf7fa87bb1b858bcc39a9abecf Jan 26 09:52:26 crc kubenswrapper[4827]: I0126 09:52:26.280262 4827 generic.go:334] "Generic (PLEG): container finished" podID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerID="e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a" exitCode=0 Jan 26 09:52:26 crc kubenswrapper[4827]: I0126 09:52:26.280572 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerDied","Data":"e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a"} Jan 26 09:52:26 crc kubenswrapper[4827]: I0126 09:52:26.280602 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerStarted","Data":"97396c4d06b3bf6defad6862b7b458df61fe42cf7fa87bb1b858bcc39a9abecf"} Jan 26 09:52:27 crc kubenswrapper[4827]: I0126 09:52:27.289281 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" 
event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerStarted","Data":"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99"} Jan 26 09:52:28 crc kubenswrapper[4827]: I0126 09:52:28.302117 4827 generic.go:334] "Generic (PLEG): container finished" podID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerID="9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99" exitCode=0 Jan 26 09:52:28 crc kubenswrapper[4827]: I0126 09:52:28.302162 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerDied","Data":"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99"} Jan 26 09:52:29 crc kubenswrapper[4827]: I0126 09:52:29.313521 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerStarted","Data":"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6"} Jan 26 09:52:29 crc kubenswrapper[4827]: I0126 09:52:29.346874 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lwbtg" podStartSLOduration=2.880941972 podStartE2EDuration="5.346846799s" podCreationTimestamp="2026-01-26 09:52:24 +0000 UTC" firstStartedPulling="2026-01-26 09:52:26.283449577 +0000 UTC m=+2774.932121406" lastFinishedPulling="2026-01-26 09:52:28.749354414 +0000 UTC m=+2777.398026233" observedRunningTime="2026-01-26 09:52:29.341576244 +0000 UTC m=+2777.990248083" watchObservedRunningTime="2026-01-26 09:52:29.346846799 +0000 UTC m=+2777.995518648" Jan 26 09:52:35 crc kubenswrapper[4827]: I0126 09:52:35.311381 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:35 crc kubenswrapper[4827]: I0126 09:52:35.313051 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:35 crc kubenswrapper[4827]: I0126 09:52:35.384917 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:36 crc kubenswrapper[4827]: I0126 09:52:36.409600 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:36 crc kubenswrapper[4827]: I0126 09:52:36.474575 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:38 crc kubenswrapper[4827]: I0126 09:52:38.382975 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lwbtg" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="registry-server" containerID="cri-o://9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6" gracePeriod=2 Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.282570 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.370794 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content\") pod \"b541b313-3f02-466e-8ecd-9a93aef18b01\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.371101 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities\") pod \"b541b313-3f02-466e-8ecd-9a93aef18b01\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.371284 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctpg\" (UniqueName: \"kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg\") pod \"b541b313-3f02-466e-8ecd-9a93aef18b01\" (UID: \"b541b313-3f02-466e-8ecd-9a93aef18b01\") " Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.373688 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities" (OuterVolumeSpecName: "utilities") pod "b541b313-3f02-466e-8ecd-9a93aef18b01" (UID: "b541b313-3f02-466e-8ecd-9a93aef18b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.378556 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg" (OuterVolumeSpecName: "kube-api-access-5ctpg") pod "b541b313-3f02-466e-8ecd-9a93aef18b01" (UID: "b541b313-3f02-466e-8ecd-9a93aef18b01"). InnerVolumeSpecName "kube-api-access-5ctpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.396325 4827 generic.go:334] "Generic (PLEG): container finished" podID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerID="9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6" exitCode=0 Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.396376 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerDied","Data":"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6"} Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.396414 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwbtg" event={"ID":"b541b313-3f02-466e-8ecd-9a93aef18b01","Type":"ContainerDied","Data":"97396c4d06b3bf6defad6862b7b458df61fe42cf7fa87bb1b858bcc39a9abecf"} Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.396435 4827 scope.go:117] "RemoveContainer" containerID="9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.397474 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwbtg" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.429008 4827 scope.go:117] "RemoveContainer" containerID="9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.446698 4827 scope.go:117] "RemoveContainer" containerID="e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.449331 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b541b313-3f02-466e-8ecd-9a93aef18b01" (UID: "b541b313-3f02-466e-8ecd-9a93aef18b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.473407 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.473444 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b541b313-3f02-466e-8ecd-9a93aef18b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.473460 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctpg\" (UniqueName: \"kubernetes.io/projected/b541b313-3f02-466e-8ecd-9a93aef18b01-kube-api-access-5ctpg\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.486493 4827 scope.go:117] "RemoveContainer" containerID="9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6" Jan 26 09:52:39 crc kubenswrapper[4827]: E0126 09:52:39.486977 4827 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6\": container with ID starting with 9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6 not found: ID does not exist" containerID="9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.487006 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6"} err="failed to get container status \"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6\": rpc error: code = NotFound desc = could not find container \"9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6\": container with ID starting with 9b4ca49bb629dd9352fec62d74d00c2f7827414e6357ea09bf99b4e7537f28b6 not found: ID does not exist" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.487026 4827 scope.go:117] "RemoveContainer" containerID="9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99" Jan 26 09:52:39 crc kubenswrapper[4827]: E0126 09:52:39.487434 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99\": container with ID starting with 9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99 not found: ID does not exist" containerID="9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.487526 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99"} err="failed to get container status \"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99\": rpc error: code = NotFound desc = could not find container 
\"9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99\": container with ID starting with 9433a5686f0e1dcef1454279510cbfbd6b5909c069aae9cb7dd59958594a3c99 not found: ID does not exist" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.487606 4827 scope.go:117] "RemoveContainer" containerID="e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a" Jan 26 09:52:39 crc kubenswrapper[4827]: E0126 09:52:39.487979 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a\": container with ID starting with e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a not found: ID does not exist" containerID="e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.488009 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a"} err="failed to get container status \"e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a\": rpc error: code = NotFound desc = could not find container \"e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a\": container with ID starting with e65d623c117879e4165460c48d8789904c2d6bf5aa61efdf449de356d6d95d9a not found: ID does not exist" Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.736424 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:39 crc kubenswrapper[4827]: I0126 09:52:39.745243 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lwbtg"] Jan 26 09:52:41 crc kubenswrapper[4827]: I0126 09:52:41.775724 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" 
path="/var/lib/kubelet/pods/b541b313-3f02-466e-8ecd-9a93aef18b01/volumes" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.184775 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:44 crc kubenswrapper[4827]: E0126 09:52:44.186326 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="extract-utilities" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.186426 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="extract-utilities" Jan 26 09:52:44 crc kubenswrapper[4827]: E0126 09:52:44.186538 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="registry-server" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.186650 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="registry-server" Jan 26 09:52:44 crc kubenswrapper[4827]: E0126 09:52:44.186739 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="extract-content" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.186832 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="extract-content" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.187142 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b541b313-3f02-466e-8ecd-9a93aef18b01" containerName="registry-server" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.188701 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.201759 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.371443 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bpwl\" (UniqueName: \"kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.371838 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.371863 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.472920 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bpwl\" (UniqueName: \"kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.473028 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.473058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.473587 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.473725 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.503331 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bpwl\" (UniqueName: \"kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl\") pod \"redhat-marketplace-5q7r2\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:44 crc kubenswrapper[4827]: I0126 09:52:44.522913 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:45 crc kubenswrapper[4827]: I0126 09:52:45.028460 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:45 crc kubenswrapper[4827]: I0126 09:52:45.445998 4827 generic.go:334] "Generic (PLEG): container finished" podID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerID="7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7" exitCode=0 Jan 26 09:52:45 crc kubenswrapper[4827]: I0126 09:52:45.446181 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerDied","Data":"7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7"} Jan 26 09:52:45 crc kubenswrapper[4827]: I0126 09:52:45.446314 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerStarted","Data":"276c10565b7f47233772d2e40d895caf6ad55953a0731c681398bb676630386d"} Jan 26 09:52:46 crc kubenswrapper[4827]: I0126 09:52:46.455920 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerStarted","Data":"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc"} Jan 26 09:52:47 crc kubenswrapper[4827]: I0126 09:52:47.466100 4827 generic.go:334] "Generic (PLEG): container finished" podID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerID="5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc" exitCode=0 Jan 26 09:52:47 crc kubenswrapper[4827]: I0126 09:52:47.466185 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" 
event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerDied","Data":"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc"} Jan 26 09:52:48 crc kubenswrapper[4827]: I0126 09:52:48.477898 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerStarted","Data":"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3"} Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.523789 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.524175 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.570986 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.589656 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5q7r2" podStartSLOduration=8.167675962 podStartE2EDuration="10.58961927s" podCreationTimestamp="2026-01-26 09:52:44 +0000 UTC" firstStartedPulling="2026-01-26 09:52:45.450730866 +0000 UTC m=+2794.099402685" lastFinishedPulling="2026-01-26 09:52:47.872674144 +0000 UTC m=+2796.521345993" observedRunningTime="2026-01-26 09:52:48.496090492 +0000 UTC m=+2797.144762311" watchObservedRunningTime="2026-01-26 09:52:54.58961927 +0000 UTC m=+2803.238291099" Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.633619 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:54 crc kubenswrapper[4827]: I0126 09:52:54.810690 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:56 crc kubenswrapper[4827]: I0126 09:52:56.561407 4827 generic.go:334] "Generic (PLEG): container finished" podID="fd1780e3-c584-4f74-b260-cec896594153" containerID="70f7c22da752284226e6d17dc1fca54d2e512dfa7904a71955677de16f4f2509" exitCode=0 Jan 26 09:52:56 crc kubenswrapper[4827]: I0126 09:52:56.561510 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" event={"ID":"fd1780e3-c584-4f74-b260-cec896594153","Type":"ContainerDied","Data":"70f7c22da752284226e6d17dc1fca54d2e512dfa7904a71955677de16f4f2509"} Jan 26 09:52:56 crc kubenswrapper[4827]: I0126 09:52:56.562017 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5q7r2" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="registry-server" containerID="cri-o://a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3" gracePeriod=2 Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.551015 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.574872 4827 generic.go:334] "Generic (PLEG): container finished" podID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerID="a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3" exitCode=0 Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.575006 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q7r2" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.575052 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerDied","Data":"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3"} Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.575078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q7r2" event={"ID":"9fde096b-f970-4947-8d9c-7a055b94fdbe","Type":"ContainerDied","Data":"276c10565b7f47233772d2e40d895caf6ad55953a0731c681398bb676630386d"} Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.575094 4827 scope.go:117] "RemoveContainer" containerID="a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.622914 4827 scope.go:117] "RemoveContainer" containerID="5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.647137 4827 scope.go:117] "RemoveContainer" containerID="7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.684181 4827 scope.go:117] "RemoveContainer" containerID="a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3" Jan 26 09:52:57 crc kubenswrapper[4827]: E0126 09:52:57.684703 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3\": container with ID starting with a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3 not found: ID does not exist" containerID="a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.684736 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3"} err="failed to get container status \"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3\": rpc error: code = NotFound desc = could not find container \"a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3\": container with ID starting with a45e538d7d5974e6db83d9309e74767b9110a39edf2646c4d4e3fbf1f52f8aa3 not found: ID does not exist" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.684756 4827 scope.go:117] "RemoveContainer" containerID="5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc" Jan 26 09:52:57 crc kubenswrapper[4827]: E0126 09:52:57.685064 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc\": container with ID starting with 5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc not found: ID does not exist" containerID="5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.685084 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc"} err="failed to get container status \"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc\": rpc error: code = NotFound desc = could not find container \"5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc\": container with ID starting with 5ed5308a21cfb655240dc4d55301d613f6e0403b85b3363b891b582de8435dfc not found: ID does not exist" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.685096 4827 scope.go:117] "RemoveContainer" containerID="7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7" Jan 26 09:52:57 crc kubenswrapper[4827]: E0126 09:52:57.685497 4827 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7\": container with ID starting with 7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7 not found: ID does not exist" containerID="7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.685517 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7"} err="failed to get container status \"7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7\": rpc error: code = NotFound desc = could not find container \"7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7\": container with ID starting with 7b3c6867306ab1db399cb3749e4e83be9925e24d18f83ee5fbb948660c23b0b7 not found: ID does not exist" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.726319 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities\") pod \"9fde096b-f970-4947-8d9c-7a055b94fdbe\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.726371 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content\") pod \"9fde096b-f970-4947-8d9c-7a055b94fdbe\" (UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.726472 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bpwl\" (UniqueName: \"kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl\") pod \"9fde096b-f970-4947-8d9c-7a055b94fdbe\" 
(UID: \"9fde096b-f970-4947-8d9c-7a055b94fdbe\") " Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.729735 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities" (OuterVolumeSpecName: "utilities") pod "9fde096b-f970-4947-8d9c-7a055b94fdbe" (UID: "9fde096b-f970-4947-8d9c-7a055b94fdbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.732572 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl" (OuterVolumeSpecName: "kube-api-access-2bpwl") pod "9fde096b-f970-4947-8d9c-7a055b94fdbe" (UID: "9fde096b-f970-4947-8d9c-7a055b94fdbe"). InnerVolumeSpecName "kube-api-access-2bpwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.752691 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fde096b-f970-4947-8d9c-7a055b94fdbe" (UID: "9fde096b-f970-4947-8d9c-7a055b94fdbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.828306 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.828343 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde096b-f970-4947-8d9c-7a055b94fdbe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.828357 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bpwl\" (UniqueName: \"kubernetes.io/projected/9fde096b-f970-4947-8d9c-7a055b94fdbe-kube-api-access-2bpwl\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.910481 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.919231 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q7r2"] Jan 26 09:52:57 crc kubenswrapper[4827]: I0126 09:52:57.983262 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.133625 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.133966 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdn2n\" (UniqueName: \"kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.134033 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.134058 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.134516 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.134619 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.135082 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0\") pod \"fd1780e3-c584-4f74-b260-cec896594153\" (UID: \"fd1780e3-c584-4f74-b260-cec896594153\") " Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.138935 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.138993 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph" (OuterVolumeSpecName: "ceph") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.139374 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n" (OuterVolumeSpecName: "kube-api-access-xdn2n") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). 
InnerVolumeSpecName "kube-api-access-xdn2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.160479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.162361 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.167604 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory" (OuterVolumeSpecName: "inventory") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.167775 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd1780e3-c584-4f74-b260-cec896594153" (UID: "fd1780e3-c584-4f74-b260-cec896594153"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262670 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262703 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdn2n\" (UniqueName: \"kubernetes.io/projected/fd1780e3-c584-4f74-b260-cec896594153-kube-api-access-xdn2n\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262713 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262722 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262730 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262739 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.262751 4827 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/fd1780e3-c584-4f74-b260-cec896594153-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.589899 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" event={"ID":"fd1780e3-c584-4f74-b260-cec896594153","Type":"ContainerDied","Data":"96a65afda489d3ff751adb830678e18642b29713cf4870e087ff1a963dd19077"} Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.589939 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a65afda489d3ff751adb830678e18642b29713cf4870e087ff1a963dd19077" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.589987 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.729687 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92"] Jan 26 09:52:58 crc kubenswrapper[4827]: E0126 09:52:58.731198 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1780e3-c584-4f74-b260-cec896594153" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731235 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1780e3-c584-4f74-b260-cec896594153" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 09:52:58 crc kubenswrapper[4827]: E0126 09:52:58.731244 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="extract-utilities" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731250 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="extract-utilities" Jan 26 09:52:58 crc 
kubenswrapper[4827]: E0126 09:52:58.731270 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="extract-content" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731276 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="extract-content" Jan 26 09:52:58 crc kubenswrapper[4827]: E0126 09:52:58.731317 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="registry-server" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731325 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="registry-server" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731530 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" containerName="registry-server" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.731558 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1780e3-c584-4f74-b260-cec896594153" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.732337 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.734744 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.735346 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.736006 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.737509 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.743481 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.744051 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.754234 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92"] Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873174 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873303 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873350 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4h77\" (UniqueName: \"kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873379 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.873417 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974663 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974704 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974740 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h77\" (UniqueName: \"kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974793 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.974853 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.979471 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.980431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.990158 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.991248 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:58 crc kubenswrapper[4827]: I0126 09:52:58.992296 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:59 crc kubenswrapper[4827]: I0126 09:52:59.003038 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h77\" (UniqueName: \"kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gfc92\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:59 crc kubenswrapper[4827]: I0126 09:52:59.080454 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" Jan 26 09:52:59 crc kubenswrapper[4827]: I0126 09:52:59.430918 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92"] Jan 26 09:52:59 crc kubenswrapper[4827]: W0126 09:52:59.433172 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f2e9aa2_d136_40ad_a382_41abb6ce645a.slice/crio-27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b WatchSource:0}: Error finding container 27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b: Status 404 returned error can't find the container with id 27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b Jan 26 09:52:59 crc kubenswrapper[4827]: I0126 09:52:59.618237 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" event={"ID":"9f2e9aa2-d136-40ad-a382-41abb6ce645a","Type":"ContainerStarted","Data":"27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b"} Jan 26 09:52:59 crc kubenswrapper[4827]: I0126 09:52:59.712901 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fde096b-f970-4947-8d9c-7a055b94fdbe" path="/var/lib/kubelet/pods/9fde096b-f970-4947-8d9c-7a055b94fdbe/volumes" Jan 26 09:53:00 crc kubenswrapper[4827]: I0126 09:53:00.633390 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" event={"ID":"9f2e9aa2-d136-40ad-a382-41abb6ce645a","Type":"ContainerStarted","Data":"50c9cc28f4b2a54a013acc1b7048eb2c9e9c08efefb8167e1785191fd0d30aee"} Jan 26 09:53:00 crc kubenswrapper[4827]: I0126 09:53:00.674892 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" podStartSLOduration=2.277842415 
podStartE2EDuration="2.674856642s" podCreationTimestamp="2026-01-26 09:52:58 +0000 UTC" firstStartedPulling="2026-01-26 09:52:59.43510451 +0000 UTC m=+2808.083776329" lastFinishedPulling="2026-01-26 09:52:59.832118737 +0000 UTC m=+2808.480790556" observedRunningTime="2026-01-26 09:53:00.658504792 +0000 UTC m=+2809.307176621" watchObservedRunningTime="2026-01-26 09:53:00.674856642 +0000 UTC m=+2809.323528501" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.302475 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c45rz"] Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.305530 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.330411 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c45rz"] Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.438285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.438364 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.438423 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kwg\" (UniqueName: 
\"kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.539766 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.540057 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.540203 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kwg\" (UniqueName: \"kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.540368 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.540383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.557986 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kwg\" (UniqueName: \"kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg\") pod \"community-operators-c45rz\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") " pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.625420 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c45rz" Jan 26 09:54:02 crc kubenswrapper[4827]: I0126 09:54:02.929766 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c45rz"] Jan 26 09:54:03 crc kubenswrapper[4827]: I0126 09:54:03.244813 4827 generic.go:334] "Generic (PLEG): container finished" podID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerID="650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed" exitCode=0 Jan 26 09:54:03 crc kubenswrapper[4827]: I0126 09:54:03.244891 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerDied","Data":"650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed"} Jan 26 09:54:03 crc kubenswrapper[4827]: I0126 09:54:03.244930 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerStarted","Data":"9e7bd9d813f54b24d38e81f2a9bb2832559aeee0b6bf7d94c13202905602a83d"} Jan 26 09:54:04 crc kubenswrapper[4827]: I0126 09:54:04.254349 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerStarted","Data":"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"}
Jan 26 09:54:06 crc kubenswrapper[4827]: I0126 09:54:06.274800 4827 generic.go:334] "Generic (PLEG): container finished" podID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerID="0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f" exitCode=0
Jan 26 09:54:06 crc kubenswrapper[4827]: I0126 09:54:06.274868 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerDied","Data":"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"}
Jan 26 09:54:07 crc kubenswrapper[4827]: I0126 09:54:07.287098 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerStarted","Data":"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"}
Jan 26 09:54:07 crc kubenswrapper[4827]: I0126 09:54:07.314405 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c45rz" podStartSLOduration=1.84988676 podStartE2EDuration="5.31437734s" podCreationTimestamp="2026-01-26 09:54:02 +0000 UTC" firstStartedPulling="2026-01-26 09:54:03.246498042 +0000 UTC m=+2871.895169891" lastFinishedPulling="2026-01-26 09:54:06.710988642 +0000 UTC m=+2875.359660471" observedRunningTime="2026-01-26 09:54:07.308342323 +0000 UTC m=+2875.957014152" watchObservedRunningTime="2026-01-26 09:54:07.31437734 +0000 UTC m=+2875.963049169"
Jan 26 09:54:12 crc kubenswrapper[4827]: I0126 09:54:12.268500 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:54:12 crc kubenswrapper[4827]: I0126 09:54:12.270652 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:54:12 crc kubenswrapper[4827]: I0126 09:54:12.626392 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:12 crc kubenswrapper[4827]: I0126 09:54:12.626989 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:12 crc kubenswrapper[4827]: I0126 09:54:12.713033 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:13 crc kubenswrapper[4827]: I0126 09:54:13.419962 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:14 crc kubenswrapper[4827]: I0126 09:54:14.073728 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c45rz"]
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.373555 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c45rz" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="registry-server" containerID="cri-o://e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2" gracePeriod=2
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.845444 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.949674 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content\") pod \"0bcbe994-df32-417d-9f51-9b420e3b83cc\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") "
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.949726 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7kwg\" (UniqueName: \"kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg\") pod \"0bcbe994-df32-417d-9f51-9b420e3b83cc\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") "
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.949834 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities\") pod \"0bcbe994-df32-417d-9f51-9b420e3b83cc\" (UID: \"0bcbe994-df32-417d-9f51-9b420e3b83cc\") "
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.950883 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities" (OuterVolumeSpecName: "utilities") pod "0bcbe994-df32-417d-9f51-9b420e3b83cc" (UID: "0bcbe994-df32-417d-9f51-9b420e3b83cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:54:15 crc kubenswrapper[4827]: I0126 09:54:15.956195 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg" (OuterVolumeSpecName: "kube-api-access-q7kwg") pod "0bcbe994-df32-417d-9f51-9b420e3b83cc" (UID: "0bcbe994-df32-417d-9f51-9b420e3b83cc"). InnerVolumeSpecName "kube-api-access-q7kwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.013927 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bcbe994-df32-417d-9f51-9b420e3b83cc" (UID: "0bcbe994-df32-417d-9f51-9b420e3b83cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.051685 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.051719 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7kwg\" (UniqueName: \"kubernetes.io/projected/0bcbe994-df32-417d-9f51-9b420e3b83cc-kube-api-access-q7kwg\") on node \"crc\" DevicePath \"\""
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.051735 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcbe994-df32-417d-9f51-9b420e3b83cc-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.382732 4827 generic.go:334] "Generic (PLEG): container finished" podID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerID="e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2" exitCode=0
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.382777 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerDied","Data":"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"}
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.382797 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c45rz"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.382940 4827 scope.go:117] "RemoveContainer" containerID="e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.382916 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c45rz" event={"ID":"0bcbe994-df32-417d-9f51-9b420e3b83cc","Type":"ContainerDied","Data":"9e7bd9d813f54b24d38e81f2a9bb2832559aeee0b6bf7d94c13202905602a83d"}
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.423954 4827 scope.go:117] "RemoveContainer" containerID="0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.458100 4827 scope.go:117] "RemoveContainer" containerID="650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.466874 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c45rz"]
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.501245 4827 scope.go:117] "RemoveContainer" containerID="e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"
Jan 26 09:54:16 crc kubenswrapper[4827]: E0126 09:54:16.504572 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2\": container with ID starting with e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2 not found: ID does not exist" containerID="e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.504604 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2"} err="failed to get container status \"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2\": rpc error: code = NotFound desc = could not find container \"e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2\": container with ID starting with e03b000d0c66cf3f55339e09347570b8631b831a7ca2f4743907fc0555a503b2 not found: ID does not exist"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.504628 4827 scope.go:117] "RemoveContainer" containerID="0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.508456 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c45rz"]
Jan 26 09:54:16 crc kubenswrapper[4827]: E0126 09:54:16.508933 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f\": container with ID starting with 0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f not found: ID does not exist" containerID="0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.508992 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f"} err="failed to get container status \"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f\": rpc error: code = NotFound desc = could not find container \"0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f\": container with ID starting with 0a51c9adc11f3ba00a619c2ce836a8806feeaaf1b0082587a1ca13c69c32e36f not found: ID does not exist"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.509023 4827 scope.go:117] "RemoveContainer" containerID="650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed"
Jan 26 09:54:16 crc kubenswrapper[4827]: E0126 09:54:16.509761 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed\": container with ID starting with 650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed not found: ID does not exist" containerID="650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed"
Jan 26 09:54:16 crc kubenswrapper[4827]: I0126 09:54:16.509787 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed"} err="failed to get container status \"650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed\": rpc error: code = NotFound desc = could not find container \"650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed\": container with ID starting with 650170cbaef187515e9ea9f0d7d3a3b4221091772b1acaf8c49758c6d51813ed not found: ID does not exist"
Jan 26 09:54:17 crc kubenswrapper[4827]: I0126 09:54:17.720064 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" path="/var/lib/kubelet/pods/0bcbe994-df32-417d-9f51-9b420e3b83cc/volumes"
Jan 26 09:54:42 crc kubenswrapper[4827]: I0126 09:54:42.269237 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:54:42 crc kubenswrapper[4827]: I0126 09:54:42.269780 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.268761 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.269560 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.269675 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x"
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.270725 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.270822 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" gracePeriod=600
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.844509 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" exitCode=0
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.844579 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"}
Jan 26 09:55:12 crc kubenswrapper[4827]: I0126 09:55:12.844861 4827 scope.go:117] "RemoveContainer" containerID="ed72a3ee810d01b128767bddb02d7879aa5fc81f9eff8b19b5f90f15292371fb"
Jan 26 09:55:13 crc kubenswrapper[4827]: E0126 09:55:13.051498 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:55:13 crc kubenswrapper[4827]: I0126 09:55:13.861631 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:55:13 crc kubenswrapper[4827]: E0126 09:55:13.861938 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:55:24 crc kubenswrapper[4827]: I0126 09:55:24.703067 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:55:24 crc kubenswrapper[4827]: E0126 09:55:24.703700 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:55:39 crc kubenswrapper[4827]: I0126 09:55:39.703066 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:55:39 crc kubenswrapper[4827]: E0126 09:55:39.705555 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:55:51 crc kubenswrapper[4827]: I0126 09:55:51.716633 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:55:51 crc kubenswrapper[4827]: E0126 09:55:51.717603 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:56:06 crc kubenswrapper[4827]: I0126 09:56:06.702596 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:56:06 crc kubenswrapper[4827]: E0126 09:56:06.703431 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:56:19 crc kubenswrapper[4827]: I0126 09:56:19.703564 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:56:19 crc kubenswrapper[4827]: E0126 09:56:19.705541 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:56:33 crc kubenswrapper[4827]: I0126 09:56:33.703365 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:56:33 crc kubenswrapper[4827]: E0126 09:56:33.704057 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:56:46 crc kubenswrapper[4827]: I0126 09:56:46.703317 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:56:46 crc kubenswrapper[4827]: E0126 09:56:46.704045 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:57:00 crc kubenswrapper[4827]: I0126 09:57:00.703820 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:57:00 crc kubenswrapper[4827]: E0126 09:57:00.704761 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:57:13 crc kubenswrapper[4827]: I0126 09:57:13.703724 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:57:13 crc kubenswrapper[4827]: E0126 09:57:13.705086 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:57:27 crc kubenswrapper[4827]: I0126 09:57:27.703711 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:57:27 crc kubenswrapper[4827]: E0126 09:57:27.704402 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:57:41 crc kubenswrapper[4827]: I0126 09:57:41.713747 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:57:41 crc kubenswrapper[4827]: E0126 09:57:41.714880 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:57:53 crc kubenswrapper[4827]: I0126 09:57:53.703104 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:57:53 crc kubenswrapper[4827]: E0126 09:57:53.705198 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:58:05 crc kubenswrapper[4827]: I0126 09:58:05.702708 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef"
Jan 26 09:58:05 crc kubenswrapper[4827]: E0126 09:58:05.703497 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 09:58:11 crc kubenswrapper[4827]: I0126 09:58:11.596953 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f2e9aa2-d136-40ad-a382-41abb6ce645a" containerID="50c9cc28f4b2a54a013acc1b7048eb2c9e9c08efefb8167e1785191fd0d30aee" exitCode=0
Jan 26 09:58:11 crc kubenswrapper[4827]: I0126 09:58:11.599146 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" event={"ID":"9f2e9aa2-d136-40ad-a382-41abb6ce645a","Type":"ContainerDied","Data":"50c9cc28f4b2a54a013acc1b7048eb2c9e9c08efefb8167e1785191fd0d30aee"}
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.145548 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.284551 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.284683 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.284844 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.284910 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.285041 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4h77\" (UniqueName: \"kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.285377 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory\") pod \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\" (UID: \"9f2e9aa2-d136-40ad-a382-41abb6ce645a\") "
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.294554 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77" (OuterVolumeSpecName: "kube-api-access-p4h77") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "kube-api-access-p4h77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.296046 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph" (OuterVolumeSpecName: "ceph") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.300796 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.313443 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.337719 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory" (OuterVolumeSpecName: "inventory") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.340380 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9f2e9aa2-d136-40ad-a382-41abb6ce645a" (UID: "9f2e9aa2-d136-40ad-a382-41abb6ce645a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388488 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388544 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388573 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4h77\" (UniqueName: \"kubernetes.io/projected/9f2e9aa2-d136-40ad-a382-41abb6ce645a-kube-api-access-p4h77\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388597 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388620 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.388680 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2e9aa2-d136-40ad-a382-41abb6ce645a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.621578 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92" event={"ID":"9f2e9aa2-d136-40ad-a382-41abb6ce645a","Type":"ContainerDied","Data":"27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b"}
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.621612 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gfc92"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.621631 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dc566cfb895c61add8e66a0b0a1e4a4812328bb5dedbbbdffdbf689ba6e75b"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.781179 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v"]
Jan 26 09:58:13 crc kubenswrapper[4827]: E0126 09:58:13.782532 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="extract-content"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782555 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="extract-content"
Jan 26 09:58:13 crc kubenswrapper[4827]: E0126 09:58:13.782575 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="extract-utilities"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782586 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="extract-utilities"
Jan 26 09:58:13 crc kubenswrapper[4827]: E0126 09:58:13.782675 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2e9aa2-d136-40ad-a382-41abb6ce645a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782687 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2e9aa2-d136-40ad-a382-41abb6ce645a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:58:13 crc kubenswrapper[4827]: E0126 09:58:13.782705 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="registry-server"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782714 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="registry-server"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782908 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcbe994-df32-417d-9f51-9b420e3b83cc" containerName="registry-server"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.782928 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2e9aa2-d136-40ad-a382-41abb6ce645a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.783626 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.789760 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.789865 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.789971 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.790149 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.790185 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.790228 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.790356 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 26
09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.790505 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.791174 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xm22l" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.794632 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v"] Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803153 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803193 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbpd\" (UniqueName: \"kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803237 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 
09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803256 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803281 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803316 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803354 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803370 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.803391 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.811778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.811855 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.913845 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.913900 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.913943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.913978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914061 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914093 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914130 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914233 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.914260 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbpd\" (UniqueName: \"kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.915688 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.918772 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.919681 4827 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.919715 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.920240 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.920264 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.920993 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.921694 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.923347 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.923710 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:13 crc kubenswrapper[4827]: I0126 09:58:13.936196 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbpd\" (UniqueName: \"kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:14 crc kubenswrapper[4827]: I0126 09:58:14.103951 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 09:58:14 crc kubenswrapper[4827]: I0126 09:58:14.687599 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v"] Jan 26 09:58:14 crc kubenswrapper[4827]: I0126 09:58:14.704729 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:58:15 crc kubenswrapper[4827]: I0126 09:58:15.644133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" event={"ID":"62a102b4-e915-4a42-a644-91624460cb06","Type":"ContainerStarted","Data":"430e848d95ffe0cd4b9c6e08e844c58f64c68f23312296143cdd2995918ab5e1"} Jan 26 09:58:15 crc kubenswrapper[4827]: I0126 09:58:15.644619 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" event={"ID":"62a102b4-e915-4a42-a644-91624460cb06","Type":"ContainerStarted","Data":"925c1b456564796428dfaa4190c953f00c475efb633c7540d00446c34c33482e"} Jan 26 09:58:17 crc kubenswrapper[4827]: I0126 09:58:17.703554 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:58:17 crc kubenswrapper[4827]: E0126 09:58:17.704397 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" 
podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:58:30 crc kubenswrapper[4827]: I0126 09:58:30.703610 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:58:30 crc kubenswrapper[4827]: E0126 09:58:30.704562 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:58:41 crc kubenswrapper[4827]: I0126 09:58:41.719319 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:58:41 crc kubenswrapper[4827]: E0126 09:58:41.720702 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:58:55 crc kubenswrapper[4827]: I0126 09:58:55.705414 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:58:55 crc kubenswrapper[4827]: E0126 09:58:55.706240 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:59:06 crc kubenswrapper[4827]: I0126 09:59:06.703376 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:59:06 crc kubenswrapper[4827]: E0126 09:59:06.704214 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:59:20 crc kubenswrapper[4827]: I0126 09:59:20.704186 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:59:20 crc kubenswrapper[4827]: E0126 09:59:20.704992 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:59:35 crc kubenswrapper[4827]: I0126 09:59:35.703237 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:59:35 crc kubenswrapper[4827]: E0126 09:59:35.704111 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 09:59:50 crc kubenswrapper[4827]: I0126 09:59:50.702982 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 09:59:50 crc kubenswrapper[4827]: E0126 09:59:50.703986 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.141803 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" podStartSLOduration=106.735760447 podStartE2EDuration="1m47.141777431s" podCreationTimestamp="2026-01-26 09:58:13 +0000 UTC" firstStartedPulling="2026-01-26 09:58:14.70438105 +0000 UTC m=+3123.353052869" lastFinishedPulling="2026-01-26 09:58:15.110398024 +0000 UTC m=+3123.759069853" observedRunningTime="2026-01-26 09:58:15.680082184 +0000 UTC m=+3124.328754023" watchObservedRunningTime="2026-01-26 10:00:00.141777431 +0000 UTC m=+3228.790449250" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.143866 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h"] Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.145289 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.147503 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.147946 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.175476 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h"] Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.286354 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.286778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.286899 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfkmc\" (UniqueName: \"kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.388458 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.388540 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfkmc\" (UniqueName: \"kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.388603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.389431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.401830 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.422322 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfkmc\" (UniqueName: \"kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc\") pod \"collect-profiles-29490360-56m7h\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:00 crc kubenswrapper[4827]: I0126 10:00:00.489701 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:01 crc kubenswrapper[4827]: I0126 10:00:00.997424 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h"] Jan 26 10:00:01 crc kubenswrapper[4827]: I0126 10:00:01.758758 4827 generic.go:334] "Generic (PLEG): container finished" podID="2d73357b-289f-45d6-9561-bdad4323d941" containerID="4c4ec38ad25d03587586f282fb585da23d4d1901b3722982f8d3a4c6fe7f2192" exitCode=0 Jan 26 10:00:01 crc kubenswrapper[4827]: I0126 10:00:01.758857 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" event={"ID":"2d73357b-289f-45d6-9561-bdad4323d941","Type":"ContainerDied","Data":"4c4ec38ad25d03587586f282fb585da23d4d1901b3722982f8d3a4c6fe7f2192"} Jan 26 10:00:01 crc kubenswrapper[4827]: I0126 10:00:01.759063 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" 
event={"ID":"2d73357b-289f-45d6-9561-bdad4323d941","Type":"ContainerStarted","Data":"76a8fad6cac9095fff73eeaf7bcba2d9ca235ef7b0d4a30c2b2daff191594a13"} Jan 26 10:00:02 crc kubenswrapper[4827]: I0126 10:00:02.703162 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 10:00:02 crc kubenswrapper[4827]: E0126 10:00:02.703777 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.144093 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.238830 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfkmc\" (UniqueName: \"kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc\") pod \"2d73357b-289f-45d6-9561-bdad4323d941\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.238912 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume\") pod \"2d73357b-289f-45d6-9561-bdad4323d941\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.239025 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume\") pod \"2d73357b-289f-45d6-9561-bdad4323d941\" (UID: \"2d73357b-289f-45d6-9561-bdad4323d941\") " Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.239660 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d73357b-289f-45d6-9561-bdad4323d941" (UID: "2d73357b-289f-45d6-9561-bdad4323d941"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.239739 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d73357b-289f-45d6-9561-bdad4323d941-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.245794 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d73357b-289f-45d6-9561-bdad4323d941" (UID: "2d73357b-289f-45d6-9561-bdad4323d941"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.253780 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc" (OuterVolumeSpecName: "kube-api-access-jfkmc") pod "2d73357b-289f-45d6-9561-bdad4323d941" (UID: "2d73357b-289f-45d6-9561-bdad4323d941"). InnerVolumeSpecName "kube-api-access-jfkmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.341085 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfkmc\" (UniqueName: \"kubernetes.io/projected/2d73357b-289f-45d6-9561-bdad4323d941-kube-api-access-jfkmc\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.341123 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d73357b-289f-45d6-9561-bdad4323d941-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.775207 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" event={"ID":"2d73357b-289f-45d6-9561-bdad4323d941","Type":"ContainerDied","Data":"76a8fad6cac9095fff73eeaf7bcba2d9ca235ef7b0d4a30c2b2daff191594a13"} Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.775549 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a8fad6cac9095fff73eeaf7bcba2d9ca235ef7b0d4a30c2b2daff191594a13" Jan 26 10:00:03 crc kubenswrapper[4827]: I0126 10:00:03.775402 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490360-56m7h" Jan 26 10:00:04 crc kubenswrapper[4827]: I0126 10:00:04.236558 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"] Jan 26 10:00:04 crc kubenswrapper[4827]: I0126 10:00:04.242816 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-mx6jf"] Jan 26 10:00:05 crc kubenswrapper[4827]: I0126 10:00:05.717197 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971c12d9-feea-474a-b3ad-58abe6658989" path="/var/lib/kubelet/pods/971c12d9-feea-474a-b3ad-58abe6658989/volumes" Jan 26 10:00:15 crc kubenswrapper[4827]: I0126 10:00:15.702872 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 10:00:16 crc kubenswrapper[4827]: I0126 10:00:16.885336 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12"} Jan 26 10:00:18 crc kubenswrapper[4827]: I0126 10:00:18.541156 4827 scope.go:117] "RemoveContainer" containerID="d1c0efa28af5c33ab132193bfb06c4d4e49c2c555a10d617a752996594dabc59" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.782058 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:23 crc kubenswrapper[4827]: E0126 10:00:23.783035 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d73357b-289f-45d6-9561-bdad4323d941" containerName="collect-profiles" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.783048 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d73357b-289f-45d6-9561-bdad4323d941" containerName="collect-profiles" 
Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.783219 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d73357b-289f-45d6-9561-bdad4323d941" containerName="collect-profiles" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.785953 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.803440 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.817318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.817990 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.818114 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdm8p\" (UniqueName: \"kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.919723 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.919813 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdm8p\" (UniqueName: \"kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.920013 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.920128 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.920595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:23 crc kubenswrapper[4827]: I0126 10:00:23.951801 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdm8p\" (UniqueName: 
\"kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p\") pod \"redhat-operators-5dppz\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:24 crc kubenswrapper[4827]: I0126 10:00:24.116243 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:24 crc kubenswrapper[4827]: I0126 10:00:24.688642 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:24 crc kubenswrapper[4827]: I0126 10:00:24.947614 4827 generic.go:334] "Generic (PLEG): container finished" podID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerID="b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8" exitCode=0 Jan 26 10:00:24 crc kubenswrapper[4827]: I0126 10:00:24.947681 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerDied","Data":"b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8"} Jan 26 10:00:24 crc kubenswrapper[4827]: I0126 10:00:24.947931 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerStarted","Data":"4c351b4b3d692c6895e13eac0bffedbf36431e66882607fa554f6d3e06fa0d14"} Jan 26 10:00:26 crc kubenswrapper[4827]: I0126 10:00:26.969832 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerStarted","Data":"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6"} Jan 26 10:00:27 crc kubenswrapper[4827]: I0126 10:00:27.980514 4827 generic.go:334] "Generic (PLEG): container finished" podID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" 
containerID="b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6" exitCode=0 Jan 26 10:00:27 crc kubenswrapper[4827]: I0126 10:00:27.980598 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerDied","Data":"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6"} Jan 26 10:00:30 crc kubenswrapper[4827]: I0126 10:00:30.025200 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerStarted","Data":"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8"} Jan 26 10:00:30 crc kubenswrapper[4827]: I0126 10:00:30.047007 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dppz" podStartSLOduration=2.7639383239999997 podStartE2EDuration="7.046989054s" podCreationTimestamp="2026-01-26 10:00:23 +0000 UTC" firstStartedPulling="2026-01-26 10:00:24.949257152 +0000 UTC m=+3253.597928961" lastFinishedPulling="2026-01-26 10:00:29.232307872 +0000 UTC m=+3257.880979691" observedRunningTime="2026-01-26 10:00:30.044578707 +0000 UTC m=+3258.693250526" watchObservedRunningTime="2026-01-26 10:00:30.046989054 +0000 UTC m=+3258.695660873" Jan 26 10:00:34 crc kubenswrapper[4827]: I0126 10:00:34.116891 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:34 crc kubenswrapper[4827]: I0126 10:00:34.130840 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:35 crc kubenswrapper[4827]: I0126 10:00:35.188903 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5dppz" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="registry-server" 
probeResult="failure" output=< Jan 26 10:00:35 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 10:00:35 crc kubenswrapper[4827]: > Jan 26 10:00:44 crc kubenswrapper[4827]: I0126 10:00:44.163777 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:44 crc kubenswrapper[4827]: I0126 10:00:44.237014 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:44 crc kubenswrapper[4827]: I0126 10:00:44.409658 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.158745 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dppz" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="registry-server" containerID="cri-o://86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8" gracePeriod=2 Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.675372 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.838534 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdm8p\" (UniqueName: \"kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p\") pod \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.838648 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities\") pod \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.838684 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content\") pod \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\" (UID: \"fa21ce66-dedb-45d3-b5c8-9355d9f39e25\") " Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.839454 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities" (OuterVolumeSpecName: "utilities") pod "fa21ce66-dedb-45d3-b5c8-9355d9f39e25" (UID: "fa21ce66-dedb-45d3-b5c8-9355d9f39e25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.843824 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p" (OuterVolumeSpecName: "kube-api-access-wdm8p") pod "fa21ce66-dedb-45d3-b5c8-9355d9f39e25" (UID: "fa21ce66-dedb-45d3-b5c8-9355d9f39e25"). InnerVolumeSpecName "kube-api-access-wdm8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.940921 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdm8p\" (UniqueName: \"kubernetes.io/projected/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-kube-api-access-wdm8p\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.940958 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:46 crc kubenswrapper[4827]: I0126 10:00:46.994245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa21ce66-dedb-45d3-b5c8-9355d9f39e25" (UID: "fa21ce66-dedb-45d3-b5c8-9355d9f39e25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.042036 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa21ce66-dedb-45d3-b5c8-9355d9f39e25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.214562 4827 generic.go:334] "Generic (PLEG): container finished" podID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerID="86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8" exitCode=0 Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.214923 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerDied","Data":"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8"} Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.215010 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dppz" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.215024 4827 scope.go:117] "RemoveContainer" containerID="86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.214994 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dppz" event={"ID":"fa21ce66-dedb-45d3-b5c8-9355d9f39e25","Type":"ContainerDied","Data":"4c351b4b3d692c6895e13eac0bffedbf36431e66882607fa554f6d3e06fa0d14"} Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.238723 4827 scope.go:117] "RemoveContainer" containerID="b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.264624 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.271461 4827 scope.go:117] "RemoveContainer" containerID="b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.272988 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dppz"] Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.310316 4827 scope.go:117] "RemoveContainer" containerID="86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8" Jan 26 10:00:47 crc kubenswrapper[4827]: E0126 10:00:47.310849 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8\": container with ID starting with 86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8 not found: ID does not exist" containerID="86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.310921 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8"} err="failed to get container status \"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8\": rpc error: code = NotFound desc = could not find container \"86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8\": container with ID starting with 86e5d6f4ccf632243af6e77c5e3967dddbd0de33c0c7c05f3443471ad6dcbac8 not found: ID does not exist" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.310941 4827 scope.go:117] "RemoveContainer" containerID="b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6" Jan 26 10:00:47 crc kubenswrapper[4827]: E0126 10:00:47.311460 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6\": container with ID starting with b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6 not found: ID does not exist" containerID="b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.311483 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6"} err="failed to get container status \"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6\": rpc error: code = NotFound desc = could not find container \"b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6\": container with ID starting with b794de5af8cf131f7a563602f163887988e2b357deaf6ab00ecc49a98dd3a3c6 not found: ID does not exist" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.311497 4827 scope.go:117] "RemoveContainer" containerID="b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8" Jan 26 10:00:47 crc kubenswrapper[4827]: E0126 
10:00:47.311907 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8\": container with ID starting with b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8 not found: ID does not exist" containerID="b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.311931 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8"} err="failed to get container status \"b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8\": rpc error: code = NotFound desc = could not find container \"b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8\": container with ID starting with b804b0b364c2f90324cf024419fe25a75168df54592bd17b1264576233e14dd8 not found: ID does not exist" Jan 26 10:00:47 crc kubenswrapper[4827]: I0126 10:00:47.714865 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" path="/var/lib/kubelet/pods/fa21ce66-dedb-45d3-b5c8-9355d9f39e25/volumes" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.168610 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490361-2sb62"] Jan 26 10:01:00 crc kubenswrapper[4827]: E0126 10:01:00.169705 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="registry-server" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.169722 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="registry-server" Jan 26 10:01:00 crc kubenswrapper[4827]: E0126 10:01:00.169734 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" 
containerName="extract-content" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.169742 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="extract-content" Jan 26 10:01:00 crc kubenswrapper[4827]: E0126 10:01:00.169759 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="extract-utilities" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.169777 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="extract-utilities" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.170002 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa21ce66-dedb-45d3-b5c8-9355d9f39e25" containerName="registry-server" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.170954 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.183096 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490361-2sb62"] Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.307978 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.308138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7rc\" (UniqueName: \"kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" 
Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.308185 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.308218 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.410705 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.410822 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7rc\" (UniqueName: \"kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.410855 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc 
kubenswrapper[4827]: I0126 10:01:00.410883 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.417762 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.422702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.425561 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.444536 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7rc\" (UniqueName: \"kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc\") pod \"keystone-cron-29490361-2sb62\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.492581 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:00 crc kubenswrapper[4827]: I0126 10:01:00.961348 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490361-2sb62"] Jan 26 10:01:01 crc kubenswrapper[4827]: I0126 10:01:01.338679 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490361-2sb62" event={"ID":"5f315ab7-c066-4333-93bb-1e479301743a","Type":"ContainerStarted","Data":"80f1957ec1c9506d3c4e27882ec74e91058f82b95964e01312ce22ec208f30cf"} Jan 26 10:01:01 crc kubenswrapper[4827]: I0126 10:01:01.340433 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490361-2sb62" event={"ID":"5f315ab7-c066-4333-93bb-1e479301743a","Type":"ContainerStarted","Data":"a46530afb47aa5adceb232a76050261cbbf4feafb3c5b5ee71db849da686c369"} Jan 26 10:01:01 crc kubenswrapper[4827]: I0126 10:01:01.356916 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490361-2sb62" podStartSLOduration=1.356888519 podStartE2EDuration="1.356888519s" podCreationTimestamp="2026-01-26 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:01.353149925 +0000 UTC m=+3290.001821754" watchObservedRunningTime="2026-01-26 10:01:01.356888519 +0000 UTC m=+3290.005560338" Jan 26 10:01:04 crc kubenswrapper[4827]: I0126 10:01:04.368338 4827 generic.go:334] "Generic (PLEG): container finished" podID="5f315ab7-c066-4333-93bb-1e479301743a" containerID="80f1957ec1c9506d3c4e27882ec74e91058f82b95964e01312ce22ec208f30cf" exitCode=0 Jan 26 10:01:04 crc kubenswrapper[4827]: I0126 10:01:04.368425 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490361-2sb62" 
event={"ID":"5f315ab7-c066-4333-93bb-1e479301743a","Type":"ContainerDied","Data":"80f1957ec1c9506d3c4e27882ec74e91058f82b95964e01312ce22ec208f30cf"} Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.727814 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.821162 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys\") pod \"5f315ab7-c066-4333-93bb-1e479301743a\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.821422 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle\") pod \"5f315ab7-c066-4333-93bb-1e479301743a\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.821469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data\") pod \"5f315ab7-c066-4333-93bb-1e479301743a\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.821498 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7rc\" (UniqueName: \"kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc\") pod \"5f315ab7-c066-4333-93bb-1e479301743a\" (UID: \"5f315ab7-c066-4333-93bb-1e479301743a\") " Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.828634 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "5f315ab7-c066-4333-93bb-1e479301743a" (UID: "5f315ab7-c066-4333-93bb-1e479301743a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.836567 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc" (OuterVolumeSpecName: "kube-api-access-8r7rc") pod "5f315ab7-c066-4333-93bb-1e479301743a" (UID: "5f315ab7-c066-4333-93bb-1e479301743a"). InnerVolumeSpecName "kube-api-access-8r7rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.860809 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f315ab7-c066-4333-93bb-1e479301743a" (UID: "5f315ab7-c066-4333-93bb-1e479301743a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.879802 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data" (OuterVolumeSpecName: "config-data") pod "5f315ab7-c066-4333-93bb-1e479301743a" (UID: "5f315ab7-c066-4333-93bb-1e479301743a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.923303 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.923338 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.923351 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7rc\" (UniqueName: \"kubernetes.io/projected/5f315ab7-c066-4333-93bb-1e479301743a-kube-api-access-8r7rc\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:05 crc kubenswrapper[4827]: I0126 10:01:05.923363 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f315ab7-c066-4333-93bb-1e479301743a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:06 crc kubenswrapper[4827]: I0126 10:01:06.391393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490361-2sb62" event={"ID":"5f315ab7-c066-4333-93bb-1e479301743a","Type":"ContainerDied","Data":"a46530afb47aa5adceb232a76050261cbbf4feafb3c5b5ee71db849da686c369"} Jan 26 10:01:06 crc kubenswrapper[4827]: I0126 10:01:06.391770 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46530afb47aa5adceb232a76050261cbbf4feafb3c5b5ee71db849da686c369" Jan 26 10:01:06 crc kubenswrapper[4827]: I0126 10:01:06.391468 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490361-2sb62" Jan 26 10:01:25 crc kubenswrapper[4827]: I0126 10:01:25.592405 4827 generic.go:334] "Generic (PLEG): container finished" podID="62a102b4-e915-4a42-a644-91624460cb06" containerID="430e848d95ffe0cd4b9c6e08e844c58f64c68f23312296143cdd2995918ab5e1" exitCode=0 Jan 26 10:01:25 crc kubenswrapper[4827]: I0126 10:01:25.592501 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" event={"ID":"62a102b4-e915-4a42-a644-91624460cb06","Type":"ContainerDied","Data":"430e848d95ffe0cd4b9c6e08e844c58f64c68f23312296143cdd2995918ab5e1"} Jan 26 10:01:26 crc kubenswrapper[4827]: I0126 10:01:26.973391 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025255 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025310 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025359 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: 
\"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025414 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025495 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.025552 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.026158 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.026440 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbpd\" (UniqueName: \"kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.026465 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.026510 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.026561 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0\") pod \"62a102b4-e915-4a42-a644-91624460cb06\" (UID: \"62a102b4-e915-4a42-a644-91624460cb06\") " Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.030592 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd" (OuterVolumeSpecName: "kube-api-access-fdbpd") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "kube-api-access-fdbpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.037449 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbpd\" (UniqueName: \"kubernetes.io/projected/62a102b4-e915-4a42-a644-91624460cb06-kube-api-access-fdbpd\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.048792 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.050678 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph" (OuterVolumeSpecName: "ceph") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.052247 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.056548 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.057996 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory" (OuterVolumeSpecName: "inventory") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.060404 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.061307 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.062400 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.077027 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.079332 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "62a102b4-e915-4a42-a644-91624460cb06" (UID: "62a102b4-e915-4a42-a644-91624460cb06"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.139663 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.139869 4827 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.139932 4827 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140046 4827 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140110 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140170 4827 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140235 4827 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140297 4827 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140355 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62a102b4-e915-4a42-a644-91624460cb06-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.140414 4827 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/62a102b4-e915-4a42-a644-91624460cb06-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.615928 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" event={"ID":"62a102b4-e915-4a42-a644-91624460cb06","Type":"ContainerDied","Data":"925c1b456564796428dfaa4190c953f00c475efb633c7540d00446c34c33482e"} Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.616022 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925c1b456564796428dfaa4190c953f00c475efb633c7540d00446c34c33482e" Jan 26 10:01:27 crc kubenswrapper[4827]: I0126 10:01:27.616079 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.347727 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 10:01:42 crc kubenswrapper[4827]: E0126 10:01:42.348358 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f315ab7-c066-4333-93bb-1e479301743a" containerName="keystone-cron" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.348369 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f315ab7-c066-4333-93bb-1e479301743a" containerName="keystone-cron" Jan 26 10:01:42 crc kubenswrapper[4827]: E0126 10:01:42.348387 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a102b4-e915-4a42-a644-91624460cb06" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.348395 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a102b4-e915-4a42-a644-91624460cb06" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.348552 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a102b4-e915-4a42-a644-91624460cb06" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.348564 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f315ab7-c066-4333-93bb-1e479301743a" containerName="keystone-cron" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.349405 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.353291 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.353378 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.385233 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433130 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433488 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433578 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433719 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433816 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433913 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.433993 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434086 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434179 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434265 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld658\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-kube-api-access-ld658\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434339 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434420 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-run\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434506 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434575 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.434683 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536003 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536052 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld658\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-kube-api-access-ld658\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536072 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-dev\") pod 
\"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536098 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-run\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536124 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536137 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536158 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536184 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc 
kubenswrapper[4827]: I0126 10:01:42.536203 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536222 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536249 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536264 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536289 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536324 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536343 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536376 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536453 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.536980 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-run\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.537025 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " 
pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.537095 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.537331 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.537496 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.537826 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.538018 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.538446 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.538600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.541962 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.542166 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.542795 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.557942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-scripts\") pod \"cinder-volume-volume1-0\" (UID: 
\"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.558331 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld658\" (UniqueName: \"kubernetes.io/projected/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-kube-api-access-ld658\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.559397 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4313f2bb-7f66-41c6-9c0e-87ae0d9eea08-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08\") " pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:42 crc kubenswrapper[4827]: I0126 10:01:42.984085 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.094324 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.104324 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.108982 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.112204 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.203979 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204029 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204055 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-lib-modules\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204075 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204095 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-sys\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204112 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pn79\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-kube-api-access-2pn79\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204140 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204173 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204205 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-dev\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204224 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-ceph\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204258 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-scripts\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204281 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204297 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204314 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-machine-id\") pod \"cinder-backup-0\" (UID: 
\"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.204333 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-run\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.230974 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.236736 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.242428 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ksktd" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.242831 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.242934 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.242953 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.296876 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305417 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 
10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305453 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-dev\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305475 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305542 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-dev\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305580 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-ceph\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305598 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-scripts\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-cinder\") pod \"cinder-backup-0\" 
(UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.305584 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306583 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306612 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306696 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306744 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-run\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306780 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306859 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-lib-modules\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306878 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-sys\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306921 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pn79\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-kube-api-access-2pn79\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " 
pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.306969 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307492 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307587 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-sys\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307874 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307907 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-lib-modules\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307930 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.307954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/98911844-c24c-42e7-bf54-ca3cfb5d77c5-run\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.313278 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-ceph\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.334074 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.335060 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.335759 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-config-data\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.356910 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn79\" (UniqueName: \"kubernetes.io/projected/98911844-c24c-42e7-bf54-ca3cfb5d77c5-kube-api-access-2pn79\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.363164 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.364457 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.366447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98911844-c24c-42e7-bf54-ca3cfb5d77c5-scripts\") pod \"cinder-backup-0\" (UID: \"98911844-c24c-42e7-bf54-ca3cfb5d77c5\") " pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.379894 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.380516 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.381665 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408581 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408629 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408699 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408725 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wnh\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408747 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408766 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408783 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408818 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.408833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.432346 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.510745 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.510787 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.510807 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.510833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.510856 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " 
pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511272 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wnh\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511306 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511325 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511342 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511360 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmqv\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511378 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511419 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511434 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511463 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511503 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: 
I0126 10:01:43.511539 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511569 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.511585 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.512114 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.513170 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 
10:01:43.513398 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.515131 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.517540 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.526978 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.527702 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.528167 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.542317 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wnh\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh\") pod \"glance-default-external-api-0\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.591617 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-jnl6t"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.592722 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616051 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616118 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616148 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616182 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616200 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616260 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616292 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.616309 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmqv\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.617062 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.623491 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.623583 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jnl6t"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.624535 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.629349 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.629581 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.635311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.635765 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.646500 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.669947 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"043283c0-f10e-4327-922e-d5593e705611\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.679807 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmqv\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.696346 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-c5f9-account-create-update-mmbpl"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.697551 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.712649 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.725562 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.725727 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c5x\" (UniqueName: \"kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.748367 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.761395 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-c5f9-account-create-update-mmbpl"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.761429 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.801898 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.808873 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.809169 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fzk74" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.813419 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.813869 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.824997 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.846023 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 
10:01:43.846281 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7c5x\" (UniqueName: \"kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.846451 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.846490 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxbn\" (UniqueName: \"kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.851920 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.886173 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.900746 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.901255 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7c5x\" (UniqueName: \"kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x\") pod \"manila-db-create-jnl6t\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") " pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.972817 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-797576d449-dcqq6"] Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.984545 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.984591 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxbn\" (UniqueName: \"kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.984618 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc 
kubenswrapper[4827]: I0126 10:01:43.984696 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.984759 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.985519 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.985613 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.986142 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlphq\" (UniqueName: \"kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " 
pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:43 crc kubenswrapper[4827]: I0126 10:01:43.986706 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:43.999975 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.000701 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.039837 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxbn\" (UniqueName: \"kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn\") pod \"manila-c5f9-account-create-update-mmbpl\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") " pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.061020 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797576d449-dcqq6"] Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089075 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089198 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfsn\" (UniqueName: \"kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089250 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089294 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089443 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089514 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlphq\" (UniqueName: \"kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089550 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089619 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.089756 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.090183 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.090255 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.093195 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-jnl6t" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.094447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.091107 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.108390 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.112788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlphq\" (UniqueName: \"kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq\") pod \"horizon-747bb7697c-vkxjn\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.119363 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c5f9-account-create-update-mmbpl" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.165934 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.195356 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.195412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msfsn\" (UniqueName: \"kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.195455 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.195522 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.195553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 
10:01:44.196275 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.197407 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.198521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.200962 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.214357 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msfsn\" (UniqueName: \"kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn\") pod \"horizon-797576d449-dcqq6\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") " pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.321361 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797576d449-dcqq6" Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.583062 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.731067 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 10:01:44 crc kubenswrapper[4827]: I0126 10:01:44.798986 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jnl6t"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.049590 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerStarted","Data":"4e434484ac4c220117d6b4e136ba35227cffed8b74b006f49938d8c0c3dc29d3"} Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.061754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"98911844-c24c-42e7-bf54-ca3cfb5d77c5","Type":"ContainerStarted","Data":"b8ed1d778f6892f296b906c872df15d464f3c99e0b54061361fe2290065dd66d"} Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.083257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08","Type":"ContainerStarted","Data":"3f2c7575b7e6c3408906eee599cf508e39f6018a82bffddda2f173660d602f29"} Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.095470 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnl6t" event={"ID":"079e6508-8d31-439e-bc59-8fededfa8371","Type":"ContainerStarted","Data":"39dbc8049342ed892a4170954c65b89b3aeeab465749ca9e0d431b2f7c631648"} Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.112822 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 
10:01:45.174010 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-c5f9-account-create-update-mmbpl"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.197693 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797576d449-dcqq6"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.209686 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.853930 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.854335 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.855610 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.855711 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.873559 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797576d449-dcqq6"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.882887 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.945908 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.958149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.958366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.958696 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8vf\" (UniqueName: \"kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 
crc kubenswrapper[4827]: I0126 10:01:45.958773 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.958843 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.958938 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.976151 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8d4867b4-j5kkp"] Jan 26 10:01:45 crc kubenswrapper[4827]: I0126 10:01:45.979009 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.005475 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8d4867b4-j5kkp"] Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065016 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065074 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-config-data\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065099 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qcv\" (UniqueName: \"kubernetes.io/projected/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-kube-api-access-c2qcv\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065137 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-tls-certs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065179 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-secret-key\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065214 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-logs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065303 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd8vf\" (UniqueName: \"kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065324 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065346 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs\") 
pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065367 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065397 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-combined-ca-bundle\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065428 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-scripts\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.065470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.066199 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " 
pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.070821 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.075497 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.107579 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.118668 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.121838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.155558 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/manila-c5f9-account-create-update-mmbpl" event={"ID":"660be36b-a175-4e62-a15e-ddb67cb009cb","Type":"ContainerStarted","Data":"b017b06a7992110fe3719589c88d8b7ac3f8a9c2c668797a6bc2a63e75474818"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.157458 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c5f9-account-create-update-mmbpl" event={"ID":"660be36b-a175-4e62-a15e-ddb67cb009cb","Type":"ContainerStarted","Data":"4b3d374a8acb52e71b8e015dc3d9d87e53dc39d0ecf44e29fd6d0d2c8ab3ffa1"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.169790 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-combined-ca-bundle\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170081 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-scripts\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170177 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2qcv\" (UniqueName: \"kubernetes.io/projected/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-kube-api-access-c2qcv\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170221 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-config-data\") pod \"horizon-8d4867b4-j5kkp\" (UID: 
\"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170254 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-tls-certs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170845 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-secret-key\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.170908 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-logs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.176854 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-logs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.178196 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-config-data\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.181951 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd8vf\" (UniqueName: \"kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf\") pod \"horizon-5dfbfd7c96-88kv4\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.182023 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-scripts\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.188625 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-c5f9-account-create-update-mmbpl" podStartSLOduration=3.188585866 podStartE2EDuration="3.188585866s" podCreationTimestamp="2026-01-26 10:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:46.182510929 +0000 UTC m=+3334.831182748" watchObservedRunningTime="2026-01-26 10:01:46.188585866 +0000 UTC m=+3334.837257685" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.196914 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-tls-certs\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.197260 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.199617 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-horizon-secret-key\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.201327 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2qcv\" (UniqueName: \"kubernetes.io/projected/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-kube-api-access-c2qcv\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.204820 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08","Type":"ContainerStarted","Data":"3fcf0343ace08976d8ae828b509d3621d8be8593895579cef37723fdffb3fd7c"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.208490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d-combined-ca-bundle\") pod \"horizon-8d4867b4-j5kkp\" (UID: \"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d\") " pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.209155 4827 generic.go:334] "Generic (PLEG): container finished" podID="079e6508-8d31-439e-bc59-8fededfa8371" containerID="a4193fd343ff67f6875461b53393c1003031936705d94b183106411e8ffa2544" exitCode=0 Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.209256 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnl6t" 
event={"ID":"079e6508-8d31-439e-bc59-8fededfa8371","Type":"ContainerDied","Data":"a4193fd343ff67f6875461b53393c1003031936705d94b183106411e8ffa2544"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.211700 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747bb7697c-vkxjn" event={"ID":"aed59d22-b784-469b-b8f0-a2ccdc1cc096","Type":"ContainerStarted","Data":"17d0aef3f8b5d8d01de0de1fd76a777f7709052ae5094e56d413713c0d47ea16"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.214021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerStarted","Data":"3d5dc3608dc79ce732fbfd86d44c9b2ea4757fe23b9f9bcc14f05614cd90e71e"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.239791 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797576d449-dcqq6" event={"ID":"1ab57369-6c16-42b2-b765-b0e18fb182ac","Type":"ContainerStarted","Data":"2f9154745f4107f58b861530f436864cb03b545e867e434e7f2ec0458487f969"} Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.362324 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:01:46 crc kubenswrapper[4827]: I0126 10:01:46.842345 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.033682 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8d4867b4-j5kkp"] Jan 26 10:01:47 crc kubenswrapper[4827]: W0126 10:01:47.058273 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4019ef6d_d9bb_4e2c_ad8a_d51a0ebbdb2d.slice/crio-6aa04d8f8413a6a76879dd33643d4f10310b1bcfd998dbd7f2a6c6d50f9c6267 WatchSource:0}: Error finding container 6aa04d8f8413a6a76879dd33643d4f10310b1bcfd998dbd7f2a6c6d50f9c6267: Status 404 returned error can't find the container with id 6aa04d8f8413a6a76879dd33643d4f10310b1bcfd998dbd7f2a6c6d50f9c6267 Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.256877 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4313f2bb-7f66-41c6-9c0e-87ae0d9eea08","Type":"ContainerStarted","Data":"2226ee56ff8e9dfa37a07514ec92448978590d13ff56d77aa21d13c38fa62da9"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.265359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerStarted","Data":"823c50bbb3adc422718f90c3caac3c74dcb7661a58dfae51b26098f0105d8495"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.266870 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8d4867b4-j5kkp" event={"ID":"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d","Type":"ContainerStarted","Data":"6aa04d8f8413a6a76879dd33643d4f10310b1bcfd998dbd7f2a6c6d50f9c6267"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.269512 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="660be36b-a175-4e62-a15e-ddb67cb009cb" containerID="b017b06a7992110fe3719589c88d8b7ac3f8a9c2c668797a6bc2a63e75474818" exitCode=0 Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.269562 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c5f9-account-create-update-mmbpl" event={"ID":"660be36b-a175-4e62-a15e-ddb67cb009cb","Type":"ContainerDied","Data":"b017b06a7992110fe3719589c88d8b7ac3f8a9c2c668797a6bc2a63e75474818"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.272671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerStarted","Data":"9b6072dafba467ccc5ffd43b5856090b706822b820dddd244a8683a144da7cd1"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.290259 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.048313086 podStartE2EDuration="5.290239016s" podCreationTimestamp="2026-01-26 10:01:42 +0000 UTC" firstStartedPulling="2026-01-26 10:01:44.073223006 +0000 UTC m=+3332.721894825" lastFinishedPulling="2026-01-26 10:01:45.315148936 +0000 UTC m=+3333.963820755" observedRunningTime="2026-01-26 10:01:47.28130807 +0000 UTC m=+3335.929979889" watchObservedRunningTime="2026-01-26 10:01:47.290239016 +0000 UTC m=+3335.938910835" Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.290706 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerStarted","Data":"db13fbb196a35bb88870f151a067ba15645e536c264a843d71b9cc2809970bb6"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.305566 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"98911844-c24c-42e7-bf54-ca3cfb5d77c5","Type":"ContainerStarted","Data":"6dc57138ede609b8622592a65aaaefac6eea571e8b63aef61c39fed2abe4b1f6"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.305622 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"98911844-c24c-42e7-bf54-ca3cfb5d77c5","Type":"ContainerStarted","Data":"c54d1ed13ac3d70b98cc46b0b784a1ff4e862b65aa84f66e340e2bea717eed4d"} Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.340371 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.418258257 podStartE2EDuration="4.340350885s" podCreationTimestamp="2026-01-26 10:01:43 +0000 UTC" firstStartedPulling="2026-01-26 10:01:44.654796193 +0000 UTC m=+3333.303468012" lastFinishedPulling="2026-01-26 10:01:45.576888821 +0000 UTC m=+3334.225560640" observedRunningTime="2026-01-26 10:01:47.329243759 +0000 UTC m=+3335.977915578" watchObservedRunningTime="2026-01-26 10:01:47.340350885 +0000 UTC m=+3335.989022704" Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.820035 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-jnl6t"
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.827501 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts\") pod \"079e6508-8d31-439e-bc59-8fededfa8371\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") "
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.827660 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7c5x\" (UniqueName: \"kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x\") pod \"079e6508-8d31-439e-bc59-8fededfa8371\" (UID: \"079e6508-8d31-439e-bc59-8fededfa8371\") "
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.829142 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "079e6508-8d31-439e-bc59-8fededfa8371" (UID: "079e6508-8d31-439e-bc59-8fededfa8371"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.846854 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x" (OuterVolumeSpecName: "kube-api-access-b7c5x") pod "079e6508-8d31-439e-bc59-8fededfa8371" (UID: "079e6508-8d31-439e-bc59-8fededfa8371"). InnerVolumeSpecName "kube-api-access-b7c5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.930023 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/079e6508-8d31-439e-bc59-8fededfa8371-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.930065 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7c5x\" (UniqueName: \"kubernetes.io/projected/079e6508-8d31-439e-bc59-8fededfa8371-kube-api-access-b7c5x\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:47 crc kubenswrapper[4827]: I0126 10:01:47.992688 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.335848 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerStarted","Data":"e8de9be099c44772e3b766afc2d03f64019329fc156119a75c75bf2f1145ac4a"}
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.336240 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-log" containerID="cri-o://823c50bbb3adc422718f90c3caac3c74dcb7661a58dfae51b26098f0105d8495" gracePeriod=30
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.336752 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-httpd" containerID="cri-o://e8de9be099c44772e3b766afc2d03f64019329fc156119a75c75bf2f1145ac4a" gracePeriod=30
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.350463 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerStarted","Data":"450e63999867e342bea729f27aee6d89db8703612f4dd5a7f1426975b85d85c6"}
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.350679 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-log" containerID="cri-o://db13fbb196a35bb88870f151a067ba15645e536c264a843d71b9cc2809970bb6" gracePeriod=30
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.350775 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-httpd" containerID="cri-o://450e63999867e342bea729f27aee6d89db8703612f4dd5a7f1426975b85d85c6" gracePeriod=30
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.367970 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jnl6t"
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.372232 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnl6t" event={"ID":"079e6508-8d31-439e-bc59-8fededfa8371","Type":"ContainerDied","Data":"39dbc8049342ed892a4170954c65b89b3aeeab465749ca9e0d431b2f7c631648"}
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.372271 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39dbc8049342ed892a4170954c65b89b3aeeab465749ca9e0d431b2f7c631648"
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.415466 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.415391403 podStartE2EDuration="6.415391403s" podCreationTimestamp="2026-01-26 10:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:48.393950452 +0000 UTC m=+3337.042622271" watchObservedRunningTime="2026-01-26 10:01:48.415391403 +0000 UTC m=+3337.064063232"
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.434023 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 26 10:01:48 crc kubenswrapper[4827]: I0126 10:01:48.439307 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.439287151 podStartE2EDuration="6.439287151s" podCreationTimestamp="2026-01-26 10:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:48.42476291 +0000 UTC m=+3337.073434729" watchObservedRunningTime="2026-01-26 10:01:48.439287151 +0000 UTC m=+3337.087958970"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.066359 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c5f9-account-create-update-mmbpl"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.075646 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxbn\" (UniqueName: \"kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn\") pod \"660be36b-a175-4e62-a15e-ddb67cb009cb\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.075718 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts\") pod \"660be36b-a175-4e62-a15e-ddb67cb009cb\" (UID: \"660be36b-a175-4e62-a15e-ddb67cb009cb\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.076739 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "660be36b-a175-4e62-a15e-ddb67cb009cb" (UID: "660be36b-a175-4e62-a15e-ddb67cb009cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.125823 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn" (OuterVolumeSpecName: "kube-api-access-vxxbn") pod "660be36b-a175-4e62-a15e-ddb67cb009cb" (UID: "660be36b-a175-4e62-a15e-ddb67cb009cb"). InnerVolumeSpecName "kube-api-access-vxxbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.178575 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/660be36b-a175-4e62-a15e-ddb67cb009cb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.178604 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxxbn\" (UniqueName: \"kubernetes.io/projected/660be36b-a175-4e62-a15e-ddb67cb009cb-kube-api-access-vxxbn\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.401949 4827 generic.go:334] "Generic (PLEG): container finished" podID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerID="e8de9be099c44772e3b766afc2d03f64019329fc156119a75c75bf2f1145ac4a" exitCode=0
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.401983 4827 generic.go:334] "Generic (PLEG): container finished" podID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerID="823c50bbb3adc422718f90c3caac3c74dcb7661a58dfae51b26098f0105d8495" exitCode=143
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.402023 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerDied","Data":"e8de9be099c44772e3b766afc2d03f64019329fc156119a75c75bf2f1145ac4a"}
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.402199 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerDied","Data":"823c50bbb3adc422718f90c3caac3c74dcb7661a58dfae51b26098f0105d8495"}
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.407158 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c5f9-account-create-update-mmbpl" event={"ID":"660be36b-a175-4e62-a15e-ddb67cb009cb","Type":"ContainerDied","Data":"4b3d374a8acb52e71b8e015dc3d9d87e53dc39d0ecf44e29fd6d0d2c8ab3ffa1"}
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.407182 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3d374a8acb52e71b8e015dc3d9d87e53dc39d0ecf44e29fd6d0d2c8ab3ffa1"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.407247 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c5f9-account-create-update-mmbpl"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.442140 4827 generic.go:334] "Generic (PLEG): container finished" podID="043283c0-f10e-4327-922e-d5593e705611" containerID="450e63999867e342bea729f27aee6d89db8703612f4dd5a7f1426975b85d85c6" exitCode=0
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.442169 4827 generic.go:334] "Generic (PLEG): container finished" podID="043283c0-f10e-4327-922e-d5593e705611" containerID="db13fbb196a35bb88870f151a067ba15645e536c264a843d71b9cc2809970bb6" exitCode=143
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.442766 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerDied","Data":"450e63999867e342bea729f27aee6d89db8703612f4dd5a7f1426975b85d85c6"}
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.442801 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerDied","Data":"db13fbb196a35bb88870f151a067ba15645e536c264a843d71b9cc2809970bb6"}
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.628267 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.804697 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.804863 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmqv\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.804966 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.804988 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.805010 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.805029 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.805057 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.805085 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.805104 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts\") pod \"012399d3-a8d8-4465-8fa0-4346cc2d9233\" (UID: \"012399d3-a8d8-4465-8fa0-4346cc2d9233\") "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.814242 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.818629 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs" (OuterVolumeSpecName: "logs") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.830924 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.838610 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv" (OuterVolumeSpecName: "kube-api-access-rqmqv") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "kube-api-access-rqmqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.852092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts" (OuterVolumeSpecName: "scripts") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.875970 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph" (OuterVolumeSpecName: "ceph") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.886109 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908366 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmqv\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-kube-api-access-rqmqv\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908393 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-logs\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908402 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012399d3-a8d8-4465-8fa0-4346cc2d9233-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908425 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908436 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012399d3-a8d8-4465-8fa0-4346cc2d9233-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908444 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.908451 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.929781 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data" (OuterVolumeSpecName: "config-data") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.935043 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 26 10:01:49 crc kubenswrapper[4827]: I0126 10:01:49.965398 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "012399d3-a8d8-4465-8fa0-4346cc2d9233" (UID: "012399d3-a8d8-4465-8fa0-4346cc2d9233"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.010752 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.010792 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.010806 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/012399d3-a8d8-4465-8fa0-4346cc2d9233-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.048144 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.214495 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.214557 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.214585 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.214666 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.214985 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs" (OuterVolumeSpecName: "logs") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.215994 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5wnh\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.216030 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.216054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.216077 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.216102 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run\") pod \"043283c0-f10e-4327-922e-d5593e705611\" (UID: \"043283c0-f10e-4327-922e-d5593e705611\") "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.216538 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-logs\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.218332 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.219114 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh" (OuterVolumeSpecName: "kube-api-access-s5wnh") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "kube-api-access-s5wnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.219517 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts" (OuterVolumeSpecName: "scripts") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.219904 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.221036 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph" (OuterVolumeSpecName: "ceph") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.286042 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.286354 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data" (OuterVolumeSpecName: "config-data") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.300380 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "043283c0-f10e-4327-922e-d5593e705611" (UID: "043283c0-f10e-4327-922e-d5593e705611"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318517 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318544 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318553 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/043283c0-f10e-4327-922e-d5593e705611-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318563 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318594 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318604 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318613 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5wnh\" (UniqueName: \"kubernetes.io/projected/043283c0-f10e-4327-922e-d5593e705611-kube-api-access-s5wnh\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.318623 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/043283c0-f10e-4327-922e-d5593e705611-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.338394 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.419890 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.460302 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"043283c0-f10e-4327-922e-d5593e705611","Type":"ContainerDied","Data":"4e434484ac4c220117d6b4e136ba35227cffed8b74b006f49938d8c0c3dc29d3"}
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.460344 4827 scope.go:117] "RemoveContainer" containerID="450e63999867e342bea729f27aee6d89db8703612f4dd5a7f1426975b85d85c6"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.460525 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.472744 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"012399d3-a8d8-4465-8fa0-4346cc2d9233","Type":"ContainerDied","Data":"3d5dc3608dc79ce732fbfd86d44c9b2ea4757fe23b9f9bcc14f05614cd90e71e"}
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.472771 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.525786 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.546789 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.556748 4827 scope.go:117] "RemoveContainer" containerID="db13fbb196a35bb88870f151a067ba15645e536c264a843d71b9cc2809970bb6"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.556869 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557321 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e6508-8d31-439e-bc59-8fededfa8371" containerName="mariadb-database-create"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557332 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e6508-8d31-439e-bc59-8fededfa8371" containerName="mariadb-database-create"
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557346 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557352 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557367 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557373 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557383 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557389 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557397 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660be36b-a175-4e62-a15e-ddb67cb009cb" containerName="mariadb-account-create-update"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557402 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="660be36b-a175-4e62-a15e-ddb67cb009cb" containerName="mariadb-account-create-update"
Jan 26 10:01:50 crc kubenswrapper[4827]: E0126 10:01:50.557419 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557424 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557603 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557617 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557629 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" containerName="glance-httpd"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557654 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="043283c0-f10e-4327-922e-d5593e705611" containerName="glance-log"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557668 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="660be36b-a175-4e62-a15e-ddb67cb009cb" containerName="mariadb-account-create-update"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.557678 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e6508-8d31-439e-bc59-8fededfa8371" containerName="mariadb-database-create"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.559463 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.568507 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.570584 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.570799 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ksktd"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.571919 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.572053 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.597457 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.620329 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.646411 4827 scope.go:117] "RemoveContainer" containerID="e8de9be099c44772e3b766afc2d03f64019329fc156119a75c75bf2f1145ac4a"
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.647471 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.660464 4827 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.662791 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.664408 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.671468 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.730848 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.730918 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731002 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731050 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731086 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731578 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8jk\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-kube-api-access-cv8jk\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731747 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-logs\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.731805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-logs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8hk\" (UniqueName: 
\"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-kube-api-access-jj8hk\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732691 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732764 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732825 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732872 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732952 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.732998 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.733056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.733088 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.733184 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.733411 4827 scope.go:117] "RemoveContainer" 
containerID="823c50bbb3adc422718f90c3caac3c74dcb7661a58dfae51b26098f0105d8495" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835087 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-logs\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835163 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-logs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835186 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8hk\" (UniqueName: \"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-kube-api-access-jj8hk\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 
10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835266 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835308 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835344 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835361 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835386 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835405 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835448 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835468 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835486 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835503 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835521 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835547 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835606 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv8jk\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-kube-api-access-cv8jk\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.835707 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-logs\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.836013 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.836086 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.836264 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.836789 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-logs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.840741 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d29d9533-48d6-4314-8bab-835c6804dcd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.856582 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.857502 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.858725 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.859247 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.859483 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.860259 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.861005 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv8jk\" (UniqueName: \"kubernetes.io/projected/d29d9533-48d6-4314-8bab-835c6804dcd6-kube-api-access-cv8jk\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc 
kubenswrapper[4827]: I0126 10:01:50.867557 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.869250 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.869253 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.870179 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8hk\" (UniqueName: \"kubernetes.io/projected/792cbe2a-cbf2-48f0-8eac-3c3d5b91538a-kube-api-access-jj8hk\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.903261 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a\") " pod="openstack/glance-default-internal-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.911929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d29d9533-48d6-4314-8bab-835c6804dcd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:50 crc kubenswrapper[4827]: I0126 10:01:50.978065 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d29d9533-48d6-4314-8bab-835c6804dcd6\") " pod="openstack/glance-default-external-api-0" Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.028060 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.200764 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.695317 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 10:01:51 crc kubenswrapper[4827]: W0126 10:01:51.712202 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod792cbe2a_cbf2_48f0_8eac_3c3d5b91538a.slice/crio-1515c6c96b45036d9bc96d249b90fbef52737621e6899f657324435e35b05c7b WatchSource:0}: Error finding container 1515c6c96b45036d9bc96d249b90fbef52737621e6899f657324435e35b05c7b: Status 404 returned error can't find the container with id 1515c6c96b45036d9bc96d249b90fbef52737621e6899f657324435e35b05c7b Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.728550 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012399d3-a8d8-4465-8fa0-4346cc2d9233" path="/var/lib/kubelet/pods/012399d3-a8d8-4465-8fa0-4346cc2d9233/volumes" Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.729368 
4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043283c0-f10e-4327-922e-d5593e705611" path="/var/lib/kubelet/pods/043283c0-f10e-4327-922e-d5593e705611/volumes" Jan 26 10:01:51 crc kubenswrapper[4827]: I0126 10:01:51.892752 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 10:01:52 crc kubenswrapper[4827]: I0126 10:01:52.533509 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d29d9533-48d6-4314-8bab-835c6804dcd6","Type":"ContainerStarted","Data":"4d0ab80bee547ebb1ca99fe6bef9b4739a00239a5f8f2eb50b2fdd866ebc689a"} Jan 26 10:01:52 crc kubenswrapper[4827]: I0126 10:01:52.536062 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a","Type":"ContainerStarted","Data":"5ae5266640a5f19c43075f4c00aec22e2546f155a3d7a0f95649b41cd0320142"} Jan 26 10:01:52 crc kubenswrapper[4827]: I0126 10:01:52.536086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a","Type":"ContainerStarted","Data":"1515c6c96b45036d9bc96d249b90fbef52737621e6899f657324435e35b05c7b"} Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.265667 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.549182 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d29d9533-48d6-4314-8bab-835c6804dcd6","Type":"ContainerStarted","Data":"0026ca73a9da9253bf8968f85602990e3e9ebca5db5707f214f7e37145aef733"} Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.549229 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d29d9533-48d6-4314-8bab-835c6804dcd6","Type":"ContainerStarted","Data":"fcf70464552c0f375abfd382fff8a0f6296158f0b306b0ccecc3e9e79f27b57d"}
Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.552392 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"792cbe2a-cbf2-48f0-8eac-3c3d5b91538a","Type":"ContainerStarted","Data":"b5424a111b9fd70323099b5a64b56fc4cc906a8d85089772ef5bf880f65bb8a2"}
Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.575485 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.575466711 podStartE2EDuration="3.575466711s" podCreationTimestamp="2026-01-26 10:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:53.56671621 +0000 UTC m=+3342.215388049" watchObservedRunningTime="2026-01-26 10:01:53.575466711 +0000 UTC m=+3342.224138530"
Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.611281 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.611264256 podStartE2EDuration="3.611264256s" podCreationTimestamp="2026-01-26 10:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:01:53.597722883 +0000 UTC m=+3342.246394712" watchObservedRunningTime="2026-01-26 10:01:53.611264256 +0000 UTC m=+3342.259936075"
Jan 26 10:01:53 crc kubenswrapper[4827]: I0126 10:01:53.685036 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.514114 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-fdw9j"]
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.515484 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.529560 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.530777 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-lsx5g"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.545434 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fdw9j"]
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.632023 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.633697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlttw\" (UniqueName: \"kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.633771 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.634367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.736254 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.736359 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.736406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.736561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlttw\" (UniqueName: \"kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.743973 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.754105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.756184 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.759622 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlttw\" (UniqueName: \"kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw\") pod \"manila-db-sync-fdw9j\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") " pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:01:54 crc kubenswrapper[4827]: I0126 10:01:54.846692 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.029404 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.030775 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.202311 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.203878 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.247598 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.248297 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.249368 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.273558 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.636792 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.636843 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.636856 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:01 crc kubenswrapper[4827]: I0126 10:02:01.636870 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:03 crc kubenswrapper[4827]: W0126 10:02:03.512482 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5324cf16_f36a_4a9c_8b04_3646cab28702.slice/crio-f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b WatchSource:0}: Error finding container f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b: Status 404 returned error can't find the container with id f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b
Jan 26 10:02:03 crc kubenswrapper[4827]: I0126 10:02:03.517320 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fdw9j"]
Jan 26 10:02:03 crc kubenswrapper[4827]: I0126 10:02:03.659999 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fdw9j" event={"ID":"5324cf16-f36a-4a9c-8b04-3646cab28702","Type":"ContainerStarted","Data":"f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b"}
Jan 26 10:02:03 crc kubenswrapper[4827]: I0126 10:02:03.660043 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:02:03 crc kubenswrapper[4827]: I0126 10:02:03.660063 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.147383 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.148189 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh575h548hf4h67h548hbh55fh649h657h5dch9ch665h86h568h8h5b9h558h547h69h646h677h55bh6fh5dhbh59hc5h694h65bhfh645q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msfsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-797576d449-dcqq6_openstack(1ab57369-6c16-42b2-b765-b0e18fb182ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.150565 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-797576d449-dcqq6" podUID="1ab57369-6c16-42b2-b765-b0e18fb182ac"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.338078 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.338275 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n545hf4h5cbh55chd8h5fhb9h596h5d7hbh56bh654h58ch549h55dh574h58fh689h97h84h6fh5dh685h595h695h5bbhc8h548h6dh55fh56h5fdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlphq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-747bb7697c-vkxjn_openstack(aed59d22-b784-469b-b8f0-a2ccdc1cc096): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.454005 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.454254 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nch5d8hbbh568h559h69h66h54h5c5h8bhffh5fbh554h8ch688h668h68ch6ch556h67hbh589h678h5c5h5f8h597h55hf7hbch575h597h669q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hd8vf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dfbfd7c96-88kv4_openstack(78869a93-5b51-40d0-9366-a8bada4c394b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 10:02:04 crc kubenswrapper[4827]: E0126 10:02:04.800939 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-747bb7697c-vkxjn" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096"
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.029867 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797576d449-dcqq6"
Jan 26 10:02:05 crc kubenswrapper[4827]: E0126 10:02:05.044197 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b"
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.121347 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs\") pod \"1ab57369-6c16-42b2-b765-b0e18fb182ac\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") "
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.121492 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msfsn\" (UniqueName: \"kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn\") pod \"1ab57369-6c16-42b2-b765-b0e18fb182ac\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") "
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.121546 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data\") pod \"1ab57369-6c16-42b2-b765-b0e18fb182ac\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") "
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.121598 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key\") pod \"1ab57369-6c16-42b2-b765-b0e18fb182ac\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") "
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.121689 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts\") pod \"1ab57369-6c16-42b2-b765-b0e18fb182ac\" (UID: \"1ab57369-6c16-42b2-b765-b0e18fb182ac\") "
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.122724 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data" (OuterVolumeSpecName: "config-data") pod "1ab57369-6c16-42b2-b765-b0e18fb182ac" (UID: "1ab57369-6c16-42b2-b765-b0e18fb182ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.123368 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts" (OuterVolumeSpecName: "scripts") pod "1ab57369-6c16-42b2-b765-b0e18fb182ac" (UID: "1ab57369-6c16-42b2-b765-b0e18fb182ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.124511 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs" (OuterVolumeSpecName: "logs") pod "1ab57369-6c16-42b2-b765-b0e18fb182ac" (UID: "1ab57369-6c16-42b2-b765-b0e18fb182ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.127650 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn" (OuterVolumeSpecName: "kube-api-access-msfsn") pod "1ab57369-6c16-42b2-b765-b0e18fb182ac" (UID: "1ab57369-6c16-42b2-b765-b0e18fb182ac"). InnerVolumeSpecName "kube-api-access-msfsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.127932 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ab57369-6c16-42b2-b765-b0e18fb182ac" (UID: "1ab57369-6c16-42b2-b765-b0e18fb182ac"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.224630 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ab57369-6c16-42b2-b765-b0e18fb182ac-logs\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.224673 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msfsn\" (UniqueName: \"kubernetes.io/projected/1ab57369-6c16-42b2-b765-b0e18fb182ac-kube-api-access-msfsn\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.224690 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.224700 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ab57369-6c16-42b2-b765-b0e18fb182ac-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.224710 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab57369-6c16-42b2-b765-b0e18fb182ac-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.688744 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerStarted","Data":"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b"}
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.693430 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747bb7697c-vkxjn" event={"ID":"aed59d22-b784-469b-b8f0-a2ccdc1cc096","Type":"ContainerStarted","Data":"89374ee3d7c322c73c4148646892aa1f3edfa5580220cfd4eb7a99ec25129c32"}
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.693561 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-747bb7697c-vkxjn" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" containerName="horizon" containerID="cri-o://89374ee3d7c322c73c4148646892aa1f3edfa5580220cfd4eb7a99ec25129c32" gracePeriod=30
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.695992 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8d4867b4-j5kkp" event={"ID":"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d","Type":"ContainerStarted","Data":"2e82adca0baf75f11e29ec5de5a701416f7dc2a2261266967ea80bc8c5918da0"}
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.696027 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8d4867b4-j5kkp" event={"ID":"4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d","Type":"ContainerStarted","Data":"c986ef61d956a7f39866b3b7816e229e6901db87e37a7168f69333fd6582aa49"}
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.703805 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797576d449-dcqq6"
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.738144 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797576d449-dcqq6" event={"ID":"1ab57369-6c16-42b2-b765-b0e18fb182ac","Type":"ContainerDied","Data":"2f9154745f4107f58b861530f436864cb03b545e867e434e7f2ec0458487f969"}
Jan 26 10:02:05 crc kubenswrapper[4827]: I0126 10:02:05.951309 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8d4867b4-j5kkp" podStartSLOduration=3.693990621 podStartE2EDuration="20.951292474s" podCreationTimestamp="2026-01-26 10:01:45 +0000 UTC" firstStartedPulling="2026-01-26 10:01:47.070452387 +0000 UTC m=+3335.719124206" lastFinishedPulling="2026-01-26 10:02:04.32775424 +0000 UTC m=+3352.976426059" observedRunningTime="2026-01-26 10:02:05.759171776 +0000 UTC m=+3354.407843595" watchObservedRunningTime="2026-01-26 10:02:05.951292474 +0000 UTC m=+3354.599964293"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.007694 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797576d449-dcqq6"]
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.016141 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-797576d449-dcqq6"]
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.198784 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dfbfd7c96-88kv4"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.198825 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5dfbfd7c96-88kv4"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.363246 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8d4867b4-j5kkp"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.363559 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8d4867b4-j5kkp"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.713541 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerStarted","Data":"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425"}
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.739495 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dfbfd7c96-88kv4" podStartSLOduration=-9223372015.115301 podStartE2EDuration="21.739474106s" podCreationTimestamp="2026-01-26 10:01:45 +0000 UTC" firstStartedPulling="2026-01-26 10:01:46.918937737 +0000 UTC m=+3335.567609556" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:02:06.730443198 +0000 UTC m=+3355.379115017" watchObservedRunningTime="2026-01-26 10:02:06.739474106 +0000 UTC m=+3355.388145935"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.921885 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.921984 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.928506 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.928622 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:02:06 crc kubenswrapper[4827]: I0126 10:02:06.959734 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 10:02:07 crc kubenswrapper[4827]: I0126 10:02:07.292156 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 10:02:07 crc kubenswrapper[4827]: I0126 10:02:07.734373 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ab57369-6c16-42b2-b765-b0e18fb182ac" path="/var/lib/kubelet/pods/1ab57369-6c16-42b2-b765-b0e18fb182ac/volumes"
Jan 26 10:02:11 crc kubenswrapper[4827]: I0126 10:02:11.800970 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fdw9j" event={"ID":"5324cf16-f36a-4a9c-8b04-3646cab28702","Type":"ContainerStarted","Data":"6fc23479ae0c65ff87718b9cb3e239580513082941d6c825193f3380058317ba"}
Jan 26 10:02:11 crc kubenswrapper[4827]: I0126 10:02:11.824341 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-fdw9j" podStartSLOduration=10.276082588 podStartE2EDuration="17.824326724s" podCreationTimestamp="2026-01-26 10:01:54 +0000 UTC" firstStartedPulling="2026-01-26 10:02:03.517448088 +0000 UTC m=+3352.166119907" lastFinishedPulling="2026-01-26 10:02:11.065692224 +0000 UTC m=+3359.714364043" observedRunningTime="2026-01-26 10:02:11.819750358 +0000 UTC m=+3360.468422177" watchObservedRunningTime="2026-01-26 10:02:11.824326724 +0000 UTC m=+3360.472998543"
Jan 26 10:02:14 crc kubenswrapper[4827]: I0126 10:02:14.166234 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-747bb7697c-vkxjn"
Jan 26 10:02:16 crc kubenswrapper[4827]: I0126 10:02:16.212255 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Jan 26 10:02:16 crc kubenswrapper[4827]: I0126 10:02:16.372433 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8d4867b4-j5kkp" podUID="4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused"
Jan 26 10:02:26 crc kubenswrapper[4827]: I0126 10:02:26.197844 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Jan 26 10:02:26 crc kubenswrapper[4827]: I0126 10:02:26.363509 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8d4867b4-j5kkp" podUID="4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused"
Jan 26 10:02:30 crc kubenswrapper[4827]: I0126 10:02:30.964789 4827 generic.go:334] "Generic (PLEG): container finished" podID="5324cf16-f36a-4a9c-8b04-3646cab28702" containerID="6fc23479ae0c65ff87718b9cb3e239580513082941d6c825193f3380058317ba" exitCode=0
Jan 26 10:02:30 crc kubenswrapper[4827]: I0126 10:02:30.964850 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fdw9j" event={"ID":"5324cf16-f36a-4a9c-8b04-3646cab28702","Type":"ContainerDied","Data":"6fc23479ae0c65ff87718b9cb3e239580513082941d6c825193f3380058317ba"}
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.535721 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fdw9j"
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.640910 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlttw\" (UniqueName: \"kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw\") pod \"5324cf16-f36a-4a9c-8b04-3646cab28702\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") "
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.641036 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data\") pod \"5324cf16-f36a-4a9c-8b04-3646cab28702\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") "
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.642008 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data\") pod \"5324cf16-f36a-4a9c-8b04-3646cab28702\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") "
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.642206 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle\") pod \"5324cf16-f36a-4a9c-8b04-3646cab28702\" (UID: \"5324cf16-f36a-4a9c-8b04-3646cab28702\") "
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.646730 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "5324cf16-f36a-4a9c-8b04-3646cab28702" (UID: "5324cf16-f36a-4a9c-8b04-3646cab28702"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.647918 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw" (OuterVolumeSpecName: "kube-api-access-dlttw") pod "5324cf16-f36a-4a9c-8b04-3646cab28702" (UID: "5324cf16-f36a-4a9c-8b04-3646cab28702"). InnerVolumeSpecName "kube-api-access-dlttw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.648781 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data" (OuterVolumeSpecName: "config-data") pod "5324cf16-f36a-4a9c-8b04-3646cab28702" (UID: "5324cf16-f36a-4a9c-8b04-3646cab28702"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.673997 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5324cf16-f36a-4a9c-8b04-3646cab28702" (UID: "5324cf16-f36a-4a9c-8b04-3646cab28702"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.744334 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.744367 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.744382 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlttw\" (UniqueName: \"kubernetes.io/projected/5324cf16-f36a-4a9c-8b04-3646cab28702-kube-api-access-dlttw\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.744393 4827 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5324cf16-f36a-4a9c-8b04-3646cab28702-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.986397 4827 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/manila-db-sync-fdw9j" Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.986404 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fdw9j" event={"ID":"5324cf16-f36a-4a9c-8b04-3646cab28702","Type":"ContainerDied","Data":"f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b"} Jan 26 10:02:32 crc kubenswrapper[4827]: I0126 10:02:32.987024 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86cfbdb447ff7088b7b7054f95598d5b1420523acf35d594f8d8735d08c302b" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.289730 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: E0126 10:02:33.290104 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324cf16-f36a-4a9c-8b04-3646cab28702" containerName="manila-db-sync" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.290120 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324cf16-f36a-4a9c-8b04-3646cab28702" containerName="manila-db-sync" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.290304 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5324cf16-f36a-4a9c-8b04-3646cab28702" containerName="manila-db-sync" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.291324 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.293238 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-lsx5g" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.293263 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.293393 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.293461 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.305163 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mkw\" (UniqueName: \"kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom\") pod \"manila-scheduler-0\" (UID: 
\"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363144 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363168 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.363228 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.447657 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.452386 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.461064 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.481100 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.481434 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mkw\" (UniqueName: \"kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.481571 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.481838 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.482794 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data\") pod \"manila-scheduler-0\" (UID: 
\"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.483081 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.487181 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.500845 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.508294 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.509177 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.512916 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.529407 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mkw\" (UniqueName: \"kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw\") pod \"manila-scheduler-0\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.545126 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.592453 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.593084 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.593227 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 
10:02:33.593337 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.593441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.593698 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvht5\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.593881 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.594006 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.614879 4827 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595b86679f-j4gzs"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.617224 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.637038 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595b86679f-j4gzs"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.675220 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.696702 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvht5\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.696977 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697085 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697204 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle\") pod 
\"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697279 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697346 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697412 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697486 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.697624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 
10:02:33.704787 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.705230 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.709446 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.714075 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.718852 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.729080 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.733986 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvht5\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5\") pod \"manila-share-share1-0\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.790520 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.798437 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.799891 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.799982 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-config\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.800008 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9294\" (UniqueName: \"kubernetes.io/projected/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-kube-api-access-n9294\") pod 
\"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.800029 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.800047 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-dns-svc\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.800084 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.801536 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.814682 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.887290 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906066 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906156 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906208 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906270 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 
10:02:33.906307 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5c7s\" (UniqueName: \"kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.906330 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907185 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-config\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907234 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9294\" 
(UniqueName: \"kubernetes.io/projected/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-kube-api-access-n9294\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907254 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907272 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-dns-svc\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907312 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.907869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.908384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-config\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.908879 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.908976 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-dns-svc\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.937506 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9294\" (UniqueName: \"kubernetes.io/projected/1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006-kube-api-access-n9294\") pod \"dnsmasq-dns-595b86679f-j4gzs\" (UID: \"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006\") " pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:33 crc kubenswrapper[4827]: I0126 10:02:33.939662 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012435 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5c7s\" (UniqueName: \"kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012485 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012811 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.012848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.014614 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.015897 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.017929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.023883 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.026150 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.032212 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.042232 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5c7s\" (UniqueName: \"kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s\") pod \"manila-api-0\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.130067 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.345879 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.701875 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595b86679f-j4gzs"] Jan 26 10:02:34 crc kubenswrapper[4827]: I0126 10:02:34.834293 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:02:34 crc kubenswrapper[4827]: W0126 10:02:34.852038 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77321af3_9f6d_4f0f_a89e_b6dba5d0280d.slice/crio-36e8353ad42f5d74d337fb03e146cd96daf7b32a9308c94805cdce49be79028a WatchSource:0}: Error finding container 36e8353ad42f5d74d337fb03e146cd96daf7b32a9308c94805cdce49be79028a: Status 404 returned error can't find the container with id 36e8353ad42f5d74d337fb03e146cd96daf7b32a9308c94805cdce49be79028a Jan 26 10:02:35 crc kubenswrapper[4827]: I0126 10:02:35.029461 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerStarted","Data":"3e2e6cc5014b6901d8969db68db6e3525989627bb2e1b22cf0785e5710bfea92"} Jan 26 10:02:35 crc kubenswrapper[4827]: I0126 10:02:35.030754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerStarted","Data":"36e8353ad42f5d74d337fb03e146cd96daf7b32a9308c94805cdce49be79028a"} Jan 26 10:02:35 crc kubenswrapper[4827]: I0126 10:02:35.030850 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:35 crc kubenswrapper[4827]: I0126 10:02:35.034929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" 
event={"ID":"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006","Type":"ContainerStarted","Data":"334e497af40092b9d98132eb53df1816dbc0219b08f114dde96d76d253f42f55"} Jan 26 10:02:35 crc kubenswrapper[4827]: W0126 10:02:35.131566 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a9df04_a818_4c1f_a5b8_a18a1b3b69ca.slice/crio-f42c6423579454351517f53c13496bfa737bb99346d35cdff7bced22c9b3dc39 WatchSource:0}: Error finding container f42c6423579454351517f53c13496bfa737bb99346d35cdff7bced22c9b3dc39: Status 404 returned error can't find the container with id f42c6423579454351517f53c13496bfa737bb99346d35cdff7bced22c9b3dc39 Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.081095 4827 generic.go:334] "Generic (PLEG): container finished" podID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" containerID="89374ee3d7c322c73c4148646892aa1f3edfa5580220cfd4eb7a99ec25129c32" exitCode=137 Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.081189 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747bb7697c-vkxjn" event={"ID":"aed59d22-b784-469b-b8f0-a2ccdc1cc096","Type":"ContainerDied","Data":"89374ee3d7c322c73c4148646892aa1f3edfa5580220cfd4eb7a99ec25129c32"} Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.084127 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerStarted","Data":"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9"} Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.084158 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerStarted","Data":"f42c6423579454351517f53c13496bfa737bb99346d35cdff7bced22c9b3dc39"} Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.090027 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006" containerID="06fc1c773d79e937dca0ffde82daf494c7571a1d0b44a304cc08966872461fb2" exitCode=0 Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.090153 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" event={"ID":"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006","Type":"ContainerDied","Data":"06fc1c773d79e937dca0ffde82daf494c7571a1d0b44a304cc08966872461fb2"} Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.111408 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerStarted","Data":"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd"} Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.797738 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.834504 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.894399 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlphq\" (UniqueName: \"kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq\") pod \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.894836 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts\") pod \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.894984 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data\") pod \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.895053 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key\") pod \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.895094 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs\") pod \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\" (UID: \"aed59d22-b784-469b-b8f0-a2ccdc1cc096\") " Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.896879 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs" (OuterVolumeSpecName: "logs") pod "aed59d22-b784-469b-b8f0-a2ccdc1cc096" (UID: "aed59d22-b784-469b-b8f0-a2ccdc1cc096"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.907255 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aed59d22-b784-469b-b8f0-a2ccdc1cc096" (UID: "aed59d22-b784-469b-b8f0-a2ccdc1cc096"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.907291 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq" (OuterVolumeSpecName: "kube-api-access-wlphq") pod "aed59d22-b784-469b-b8f0-a2ccdc1cc096" (UID: "aed59d22-b784-469b-b8f0-a2ccdc1cc096"). InnerVolumeSpecName "kube-api-access-wlphq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.927353 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data" (OuterVolumeSpecName: "config-data") pod "aed59d22-b784-469b-b8f0-a2ccdc1cc096" (UID: "aed59d22-b784-469b-b8f0-a2ccdc1cc096"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.948607 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts" (OuterVolumeSpecName: "scripts") pod "aed59d22-b784-469b-b8f0-a2ccdc1cc096" (UID: "aed59d22-b784-469b-b8f0-a2ccdc1cc096"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.997260 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.997289 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aed59d22-b784-469b-b8f0-a2ccdc1cc096-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.997301 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aed59d22-b784-469b-b8f0-a2ccdc1cc096-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.997313 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed59d22-b784-469b-b8f0-a2ccdc1cc096-logs\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:36 crc kubenswrapper[4827]: I0126 10:02:36.997321 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlphq\" (UniqueName: \"kubernetes.io/projected/aed59d22-b784-469b-b8f0-a2ccdc1cc096-kube-api-access-wlphq\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.126325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerStarted","Data":"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3"} Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.136626 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747bb7697c-vkxjn" event={"ID":"aed59d22-b784-469b-b8f0-a2ccdc1cc096","Type":"ContainerDied","Data":"17d0aef3f8b5d8d01de0de1fd76a777f7709052ae5094e56d413713c0d47ea16"} Jan 26 
10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.136687 4827 scope.go:117] "RemoveContainer" containerID="89374ee3d7c322c73c4148646892aa1f3edfa5580220cfd4eb7a99ec25129c32" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.136829 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-747bb7697c-vkxjn" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.161831 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerStarted","Data":"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922"} Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.166299 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.171258 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.383442671 podStartE2EDuration="4.171236002s" podCreationTimestamp="2026-01-26 10:02:33 +0000 UTC" firstStartedPulling="2026-01-26 10:02:34.378030527 +0000 UTC m=+3383.026702346" lastFinishedPulling="2026-01-26 10:02:35.165823858 +0000 UTC m=+3383.814495677" observedRunningTime="2026-01-26 10:02:37.157584487 +0000 UTC m=+3385.806256306" watchObservedRunningTime="2026-01-26 10:02:37.171236002 +0000 UTC m=+3385.819907821" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.225468 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" event={"ID":"1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006","Type":"ContainerStarted","Data":"d5d795cbad20bfbf791e65fc34c83768b83620c5c12c28600e62f50ea54f6454"} Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.226014 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.233914 4827 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.242484 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-747bb7697c-vkxjn"] Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.278350 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.27832521 podStartE2EDuration="4.27832521s" podCreationTimestamp="2026-01-26 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:02:37.254137325 +0000 UTC m=+3385.902809134" watchObservedRunningTime="2026-01-26 10:02:37.27832521 +0000 UTC m=+3385.926997019" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.303463 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" podStartSLOduration=4.303440371 podStartE2EDuration="4.303440371s" podCreationTimestamp="2026-01-26 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:02:37.300993144 +0000 UTC m=+3385.949664963" watchObservedRunningTime="2026-01-26 10:02:37.303440371 +0000 UTC m=+3385.952112190" Jan 26 10:02:37 crc kubenswrapper[4827]: I0126 10:02:37.712329 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" path="/var/lib/kubelet/pods/aed59d22-b784-469b-b8f0-a2ccdc1cc096/volumes" Jan 26 10:02:38 crc kubenswrapper[4827]: I0126 10:02:38.245193 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api-log" containerID="cri-o://7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" gracePeriod=30 Jan 26 10:02:38 crc 
kubenswrapper[4827]: I0126 10:02:38.245532 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api" containerID="cri-o://87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" gracePeriod=30 Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.084237 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259658 4827 generic.go:334] "Generic (PLEG): container finished" podID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerID="87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" exitCode=0 Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259693 4827 generic.go:334] "Generic (PLEG): container finished" podID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerID="7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" exitCode=143 Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259739 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerDied","Data":"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922"} Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259782 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerDied","Data":"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9"} Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259796 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca","Type":"ContainerDied","Data":"f42c6423579454351517f53c13496bfa737bb99346d35cdff7bced22c9b3dc39"} Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.259835 4827 scope.go:117] "RemoveContainer" 
containerID="87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.260012 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265018 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265168 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265212 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265253 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5c7s\" (UniqueName: \"kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265328 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: 
\"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265374 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.265435 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.266117 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs" (OuterVolumeSpecName: "logs") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.266175 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.266563 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.266577 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-logs\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.270978 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts" (OuterVolumeSpecName: "scripts") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.301864 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.302013 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s" (OuterVolumeSpecName: "kube-api-access-x5c7s") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "kube-api-access-x5c7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.310087 4827 scope.go:117] "RemoveContainer" containerID="7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" Jan 26 10:02:39 crc kubenswrapper[4827]: E0126 10:02:39.345440 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data podName:c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca nodeName:}" failed. No retries permitted until 2026-01-26 10:02:39.845409801 +0000 UTC m=+3388.494081620 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca") : error deleting /var/lib/kubelet/pods/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca/volume-subpaths: remove /var/lib/kubelet/pods/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca/volume-subpaths: no such file or directory Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.350765 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.356833 4827 scope.go:117] "RemoveContainer" containerID="87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" Jan 26 10:02:39 crc kubenswrapper[4827]: E0126 10:02:39.358147 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922\": container with ID starting with 87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922 not found: ID does not exist" containerID="87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.358185 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922"} err="failed to get container status \"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922\": rpc error: code = NotFound desc = could not find container \"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922\": container with ID starting with 87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922 not found: ID does not exist" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.358223 4827 scope.go:117] "RemoveContainer" containerID="7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" Jan 26 10:02:39 crc kubenswrapper[4827]: E0126 10:02:39.374624 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9\": container with ID starting with 7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9 not found: ID does not exist" containerID="7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.374687 
4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9"} err="failed to get container status \"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9\": rpc error: code = NotFound desc = could not find container \"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9\": container with ID starting with 7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9 not found: ID does not exist" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.374713 4827 scope.go:117] "RemoveContainer" containerID="87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.376450 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.376473 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.376484 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5c7s\" (UniqueName: \"kubernetes.io/projected/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-kube-api-access-x5c7s\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.376494 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.380744 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922"} err="failed to get container status \"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922\": rpc error: code = NotFound desc = could not find container \"87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922\": container with ID starting with 87cf3ba34cc8fccaab8c1bfa22b327885e27610fa59517b1f46c17ac41b00922 not found: ID does not exist" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.380774 4827 scope.go:117] "RemoveContainer" containerID="7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.382985 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9"} err="failed to get container status \"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9\": rpc error: code = NotFound desc = could not find container \"7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9\": container with ID starting with 7fcc3de9b945e3693b8c3f49b508ac2792f9486cedd6c79eebd208d87eff23e9 not found: ID does not exist" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.725475 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.792244 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.885144 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") pod \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\" (UID: \"c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca\") " Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.892552 
4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data" (OuterVolumeSpecName: "config-data") pod "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" (UID: "c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:39 crc kubenswrapper[4827]: I0126 10:02:39.988124 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.296966 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.308187 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.320689 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:40 crc kubenswrapper[4827]: E0126 10:02:40.321073 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321089 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api" Jan 26 10:02:40 crc kubenswrapper[4827]: E0126 10:02:40.321110 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" containerName="horizon" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321118 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" containerName="horizon" Jan 26 10:02:40 crc kubenswrapper[4827]: E0126 10:02:40.321125 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api-log" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321131 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api-log" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321315 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed59d22-b784-469b-b8f0-a2ccdc1cc096" containerName="horizon" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321334 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api-log" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.321350 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" containerName="manila-api" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.322364 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.328073 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.328386 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.328553 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.330989 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.410865 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-internal-tls-certs\") pod \"manila-api-0\" (UID: 
\"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.411477 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdcq\" (UniqueName: \"kubernetes.io/projected/5176c3b1-983f-4339-aa88-18ed0df10566-kube-api-access-5qdcq\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.411698 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5176c3b1-983f-4339-aa88-18ed0df10566-etc-machine-id\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.411822 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data-custom\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.411957 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5176c3b1-983f-4339-aa88-18ed0df10566-logs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.412069 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-public-tls-certs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 
10:02:40.412141 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-scripts\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.412207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.412272 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514402 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdcq\" (UniqueName: \"kubernetes.io/projected/5176c3b1-983f-4339-aa88-18ed0df10566-kube-api-access-5qdcq\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5176c3b1-983f-4339-aa88-18ed0df10566-etc-machine-id\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data-custom\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514555 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5176c3b1-983f-4339-aa88-18ed0df10566-logs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514576 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-public-tls-certs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-scripts\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514632 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514680 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.514775 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-internal-tls-certs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.515203 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5176c3b1-983f-4339-aa88-18ed0df10566-etc-machine-id\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.517840 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5176c3b1-983f-4339-aa88-18ed0df10566-logs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.523771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-internal-tls-certs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.545437 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-scripts\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.546846 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" 
Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.546917 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-config-data-custom\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.547343 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-public-tls-certs\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.551544 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5176c3b1-983f-4339-aa88-18ed0df10566-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.593981 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdcq\" (UniqueName: \"kubernetes.io/projected/5176c3b1-983f-4339-aa88-18ed0df10566-kube-api-access-5qdcq\") pod \"manila-api-0\" (UID: \"5176c3b1-983f-4339-aa88-18ed0df10566\") " pod="openstack/manila-api-0" Jan 26 10:02:40 crc kubenswrapper[4827]: I0126 10:02:40.659744 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 26 10:02:41 crc kubenswrapper[4827]: I0126 10:02:41.350335 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 10:02:41 crc kubenswrapper[4827]: W0126 10:02:41.422116 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5176c3b1_983f_4339_aa88_18ed0df10566.slice/crio-a76e8f35c84d1556d2f8c0da7d28de0c613b35cc913c4c55ac3e70544f80e50f WatchSource:0}: Error finding container a76e8f35c84d1556d2f8c0da7d28de0c613b35cc913c4c55ac3e70544f80e50f: Status 404 returned error can't find the container with id a76e8f35c84d1556d2f8c0da7d28de0c613b35cc913c4c55ac3e70544f80e50f Jan 26 10:02:41 crc kubenswrapper[4827]: I0126 10:02:41.771786 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca" path="/var/lib/kubelet/pods/c1a9df04-a818-4c1f-a5b8-a18a1b3b69ca/volumes" Jan 26 10:02:41 crc kubenswrapper[4827]: I0126 10:02:41.959933 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8d4867b4-j5kkp" Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.024566 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.024792 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" containerID="cri-o://8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b" gracePeriod=30 Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.025045 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon-log" 
containerID="cri-o://72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425" gracePeriod=30 Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.049442 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.268447 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.268497 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.304597 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5176c3b1-983f-4339-aa88-18ed0df10566","Type":"ContainerStarted","Data":"19eafbfcf09202128646494d8974301681dc356beed67df1e983bb77f876bc36"} Jan 26 10:02:42 crc kubenswrapper[4827]: I0126 10:02:42.304666 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5176c3b1-983f-4339-aa88-18ed0df10566","Type":"ContainerStarted","Data":"a76e8f35c84d1556d2f8c0da7d28de0c613b35cc913c4c55ac3e70544f80e50f"} Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.319998 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"5176c3b1-983f-4339-aa88-18ed0df10566","Type":"ContainerStarted","Data":"6a4faa928311417b3e7ee9da56c8507080ec06d3467d4257fec8518bacdf84b5"} Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.320497 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.343911 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.34389449 podStartE2EDuration="3.34389449s" podCreationTimestamp="2026-01-26 10:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:02:43.342301845 +0000 UTC m=+3391.990973654" watchObservedRunningTime="2026-01-26 10:02:43.34389449 +0000 UTC m=+3391.992566309" Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.675968 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.942297 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595b86679f-j4gzs" Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.997767 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 10:02:43 crc kubenswrapper[4827]: I0126 10:02:43.997983 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="dnsmasq-dns" containerID="cri-o://ea3e157230cdc6d05efde7611eb7ba5b80d4f4c7d51b64082dbf18e92aa450d9" gracePeriod=10 Jan 26 10:02:44 crc kubenswrapper[4827]: I0126 10:02:44.333442 4827 generic.go:334] "Generic (PLEG): container finished" podID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerID="ea3e157230cdc6d05efde7611eb7ba5b80d4f4c7d51b64082dbf18e92aa450d9" exitCode=0 
Jan 26 10:02:44 crc kubenswrapper[4827]: I0126 10:02:44.333554 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" event={"ID":"db1d206a-dafd-4abb-8163-8865e5ebdcd6","Type":"ContainerDied","Data":"ea3e157230cdc6d05efde7611eb7ba5b80d4f4c7d51b64082dbf18e92aa450d9"} Jan 26 10:02:45 crc kubenswrapper[4827]: I0126 10:02:45.024863 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Jan 26 10:02:45 crc kubenswrapper[4827]: I0126 10:02:45.499131 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54264->10.217.0.244:8443: read: connection reset by peer" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.197814 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.355627 4827 generic.go:334] "Generic (PLEG): container finished" podID="78869a93-5b51-40d0-9366-a8bada4c394b" containerID="8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b" exitCode=0 Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.355684 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" 
event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerDied","Data":"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b"} Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.612485 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750478 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skn4c\" (UniqueName: \"kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750562 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750654 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750696 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750801 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.750832 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb\") pod \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\" (UID: \"db1d206a-dafd-4abb-8163-8865e5ebdcd6\") " Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.763600 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c" (OuterVolumeSpecName: "kube-api-access-skn4c") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "kube-api-access-skn4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.814177 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config" (OuterVolumeSpecName: "config") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.821630 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.822910 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.833254 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.857207 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.857239 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.857251 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skn4c\" (UniqueName: \"kubernetes.io/projected/db1d206a-dafd-4abb-8163-8865e5ebdcd6-kube-api-access-skn4c\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.857282 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-config\") on node \"crc\" 
DevicePath \"\"" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.857291 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.859236 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db1d206a-dafd-4abb-8163-8865e5ebdcd6" (UID: "db1d206a-dafd-4abb-8163-8865e5ebdcd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:02:46 crc kubenswrapper[4827]: I0126 10:02:46.958788 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db1d206a-dafd-4abb-8163-8865e5ebdcd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.370788 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" event={"ID":"db1d206a-dafd-4abb-8163-8865e5ebdcd6","Type":"ContainerDied","Data":"2560fa89255eafa503a5fc09b34b962bc225375c9071973041762f17b5dfdffa"} Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.370822 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-hw9ld" Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.370840 4827 scope.go:117] "RemoveContainer" containerID="ea3e157230cdc6d05efde7611eb7ba5b80d4f4c7d51b64082dbf18e92aa450d9" Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.375528 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerStarted","Data":"9954dc7d592f9f81ed602f5c3abe57c43fa6ef4189ed1a3fa68a08c3e8805c6e"} Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.395374 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.395656 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-central-agent" containerID="cri-o://d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e" gracePeriod=30 Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.395717 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-notification-agent" containerID="cri-o://cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a" gracePeriod=30 Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.395750 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="proxy-httpd" containerID="cri-o://bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a" gracePeriod=30 Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.395704 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" 
containerName="sg-core" containerID="cri-o://fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6" gracePeriod=30 Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.405385 4827 scope.go:117] "RemoveContainer" containerID="383970e8d1add77ef8bf63f2007dd12ca812bf38bda27cb82733c17026ea5421" Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.455037 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.474275 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-hw9ld"] Jan 26 10:02:47 crc kubenswrapper[4827]: I0126 10:02:47.719280 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" path="/var/lib/kubelet/pods/db1d206a-dafd-4abb-8163-8865e5ebdcd6/volumes" Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.385090 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerStarted","Data":"745055165f33589488884e1123f7a6af631eae328977347a1dbffa53fa7f842f"} Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.388070 4827 generic.go:334] "Generic (PLEG): container finished" podID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerID="bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a" exitCode=0 Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.388097 4827 generic.go:334] "Generic (PLEG): container finished" podID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerID="fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6" exitCode=2 Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.388106 4827 generic.go:334] "Generic (PLEG): container finished" podID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerID="cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a" exitCode=0 Jan 26 10:02:48 crc kubenswrapper[4827]: 
I0126 10:02:48.388131 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerDied","Data":"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a"} Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.388197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerDied","Data":"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6"} Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.388212 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerDied","Data":"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a"} Jan 26 10:02:48 crc kubenswrapper[4827]: I0126 10:02:48.417925 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.9390410019999997 podStartE2EDuration="15.417906889s" podCreationTimestamp="2026-01-26 10:02:33 +0000 UTC" firstStartedPulling="2026-01-26 10:02:34.856723552 +0000 UTC m=+3383.505395371" lastFinishedPulling="2026-01-26 10:02:46.335589439 +0000 UTC m=+3394.984261258" observedRunningTime="2026-01-26 10:02:48.409813276 +0000 UTC m=+3397.058485095" watchObservedRunningTime="2026-01-26 10:02:48.417906889 +0000 UTC m=+3397.066578708" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.059682 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.169035 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.169859 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.169929 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.169997 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.170156 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pjmj\" (UniqueName: \"kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.170249 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.170344 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.170425 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml\") pod \"5cfbff20-6cbf-4e42-a01a-31418745e44e\" (UID: \"5cfbff20-6cbf-4e42-a01a-31418745e44e\") " Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.169626 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.176474 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.180236 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj" (OuterVolumeSpecName: "kube-api-access-7pjmj") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "kube-api-access-7pjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.180903 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts" (OuterVolumeSpecName: "scripts") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.213386 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.252783 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274187 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274222 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pjmj\" (UniqueName: \"kubernetes.io/projected/5cfbff20-6cbf-4e42-a01a-31418745e44e-kube-api-access-7pjmj\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274236 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274246 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274257 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274267 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cfbff20-6cbf-4e42-a01a-31418745e44e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.274349 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: 
"5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.302204 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data" (OuterVolumeSpecName: "config-data") pod "5cfbff20-6cbf-4e42-a01a-31418745e44e" (UID: "5cfbff20-6cbf-4e42-a01a-31418745e44e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.376452 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.376517 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbff20-6cbf-4e42-a01a-31418745e44e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.421257 4827 generic.go:334] "Generic (PLEG): container finished" podID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerID="d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e" exitCode=0 Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.421304 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.421320 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerDied","Data":"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e"} Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.421379 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cfbff20-6cbf-4e42-a01a-31418745e44e","Type":"ContainerDied","Data":"ff1546e18932dfdb956fb53cf62c37e36e13773b31ebe5e7db2f3127c514c77d"} Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.421401 4827 scope.go:117] "RemoveContainer" containerID="bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.506151 4827 scope.go:117] "RemoveContainer" containerID="fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.508760 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.518817 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543070 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543576 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="init" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543588 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="init" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543610 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" 
containerName="ceilometer-central-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543617 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-central-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543646 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-notification-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543653 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-notification-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543665 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="sg-core" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543672 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="sg-core" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543685 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="proxy-httpd" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543691 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="proxy-httpd" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.543709 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="dnsmasq-dns" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543715 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="dnsmasq-dns" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543889 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" 
containerName="ceilometer-central-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543908 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="ceilometer-notification-agent" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543918 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="proxy-httpd" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543927 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" containerName="sg-core" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.543935 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d206a-dafd-4abb-8163-8865e5ebdcd6" containerName="dnsmasq-dns" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.545740 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.551252 4827 scope.go:117] "RemoveContainer" containerID="cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.551500 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.551507 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.558201 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.562617 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.600557 4827 scope.go:117] "RemoveContainer" 
containerID="d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.632216 4827 scope.go:117] "RemoveContainer" containerID="bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.633395 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a\": container with ID starting with bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a not found: ID does not exist" containerID="bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.633864 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a"} err="failed to get container status \"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a\": rpc error: code = NotFound desc = could not find container \"bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a\": container with ID starting with bb0b8e5544823749fcfb6fa0fdb05a6180a12ec4dffbe1155da16ea38f953e8a not found: ID does not exist" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.634045 4827 scope.go:117] "RemoveContainer" containerID="fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.635029 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6\": container with ID starting with fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6 not found: ID does not exist" containerID="fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6" Jan 26 10:02:51 crc 
kubenswrapper[4827]: I0126 10:02:51.635085 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6"} err="failed to get container status \"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6\": rpc error: code = NotFound desc = could not find container \"fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6\": container with ID starting with fd9cea561e0ea3c074810e0d9f7c52842ac440e522729e3c888643551156a7a6 not found: ID does not exist" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.635111 4827 scope.go:117] "RemoveContainer" containerID="cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a" Jan 26 10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.635723 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a\": container with ID starting with cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a not found: ID does not exist" containerID="cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.635761 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a"} err="failed to get container status \"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a\": rpc error: code = NotFound desc = could not find container \"cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a\": container with ID starting with cb2df6dd0e7a1b89425bcff46c0b342d1c4cde532768ac4223e7d0467db0a43a not found: ID does not exist" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.635786 4827 scope.go:117] "RemoveContainer" containerID="d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e" Jan 26 
10:02:51 crc kubenswrapper[4827]: E0126 10:02:51.636139 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e\": container with ID starting with d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e not found: ID does not exist" containerID="d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.636171 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e"} err="failed to get container status \"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e\": rpc error: code = NotFound desc = could not find container \"d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e\": container with ID starting with d54d485653fe4e74e65180b313860129d969603622f5855db1980ba5d662274e not found: ID does not exist" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697407 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697466 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-run-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697683 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-log-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697819 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-scripts\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697854 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62drx\" (UniqueName: \"kubernetes.io/projected/788b1f32-de2c-4281-902f-63df02b00cd8-kube-api-access-62drx\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.697982 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.698049 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-config-data\") pod \"ceilometer-0\" (UID: 
\"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.713934 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfbff20-6cbf-4e42-a01a-31418745e44e" path="/var/lib/kubelet/pods/5cfbff20-6cbf-4e42-a01a-31418745e44e/volumes" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.803598 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.803669 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-run-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.803768 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.804481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-log-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805015 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-scripts\") pod 
\"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62drx\" (UniqueName: \"kubernetes.io/projected/788b1f32-de2c-4281-902f-63df02b00cd8-kube-api-access-62drx\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805397 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-run-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805405 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788b1f32-de2c-4281-902f-63df02b00cd8-log-httpd\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805822 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.805865 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-config-data\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.809319 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.809382 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.809612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.810377 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-scripts\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.812525 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788b1f32-de2c-4281-902f-63df02b00cd8-config-data\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc kubenswrapper[4827]: I0126 10:02:51.823172 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62drx\" (UniqueName: \"kubernetes.io/projected/788b1f32-de2c-4281-902f-63df02b00cd8-kube-api-access-62drx\") pod \"ceilometer-0\" (UID: \"788b1f32-de2c-4281-902f-63df02b00cd8\") " pod="openstack/ceilometer-0" Jan 26 10:02:51 crc 
kubenswrapper[4827]: I0126 10:02:51.876933 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 10:02:52 crc kubenswrapper[4827]: W0126 10:02:52.387416 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod788b1f32_de2c_4281_902f_63df02b00cd8.slice/crio-a51939b301c528beae0552b7a5a82f70fc68e45f2f1508d93f90d4d2edc6e93b WatchSource:0}: Error finding container a51939b301c528beae0552b7a5a82f70fc68e45f2f1508d93f90d4d2edc6e93b: Status 404 returned error can't find the container with id a51939b301c528beae0552b7a5a82f70fc68e45f2f1508d93f90d4d2edc6e93b Jan 26 10:02:52 crc kubenswrapper[4827]: I0126 10:02:52.396079 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 10:02:52 crc kubenswrapper[4827]: I0126 10:02:52.430987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788b1f32-de2c-4281-902f-63df02b00cd8","Type":"ContainerStarted","Data":"a51939b301c528beae0552b7a5a82f70fc68e45f2f1508d93f90d4d2edc6e93b"} Jan 26 10:02:53 crc kubenswrapper[4827]: I0126 10:02:53.888547 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 26 10:02:54 crc kubenswrapper[4827]: I0126 10:02:54.449450 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788b1f32-de2c-4281-902f-63df02b00cd8","Type":"ContainerStarted","Data":"90d0207a5e612efcbbe2096814da440ecd2ee2dfe8cd2bbdfb01e26c60272c0c"} Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.178676 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.180754 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.193030 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.276830 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.277319 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5x6f\" (UniqueName: \"kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.277431 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.379195 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.379246 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j5x6f\" (UniqueName: \"kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.379294 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.379805 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.380012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.399715 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5x6f\" (UniqueName: \"kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f\") pod \"certified-operators-7mzph\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.461068 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"788b1f32-de2c-4281-902f-63df02b00cd8","Type":"ContainerStarted","Data":"f207d545813bde16137ddb141d1985bc3fbe5373de986c9a9023d9b22e9c241e"} Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.496254 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.570951 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 26 10:02:55 crc kubenswrapper[4827]: I0126 10:02:55.644558 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:56 crc kubenswrapper[4827]: I0126 10:02:56.198164 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 26 10:02:56 crc kubenswrapper[4827]: I0126 10:02:56.470143 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="manila-scheduler" containerID="cri-o://33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd" gracePeriod=30 Jan 26 10:02:56 crc kubenswrapper[4827]: I0126 10:02:56.470229 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="probe" containerID="cri-o://30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3" gracePeriod=30 Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.060168 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:02:58 crc 
kubenswrapper[4827]: W0126 10:02:58.072527 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf196cd42_2ae0_461f_b726_44aa244c1b03.slice/crio-51769922d9058ca24aa209545f8f568d410d90586309c9424acc28fc292c2cd6 WatchSource:0}: Error finding container 51769922d9058ca24aa209545f8f568d410d90586309c9424acc28fc292c2cd6: Status 404 returned error can't find the container with id 51769922d9058ca24aa209545f8f568d410d90586309c9424acc28fc292c2cd6 Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.511036 4827 generic.go:334] "Generic (PLEG): container finished" podID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerID="30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3" exitCode=0 Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.511311 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerDied","Data":"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3"} Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.513837 4827 generic.go:334] "Generic (PLEG): container finished" podID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerID="e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347" exitCode=0 Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.513899 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerDied","Data":"e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347"} Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.513953 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerStarted","Data":"51769922d9058ca24aa209545f8f568d410d90586309c9424acc28fc292c2cd6"} Jan 26 10:02:58 crc 
kubenswrapper[4827]: I0126 10:02:58.517322 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788b1f32-de2c-4281-902f-63df02b00cd8","Type":"ContainerStarted","Data":"2c0f5b93527a28c00e5a8865913456b485ed6c51ef4ad804ae37ad4a03d54e99"} Jan 26 10:02:58 crc kubenswrapper[4827]: I0126 10:02:58.994507 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.151957 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152104 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152178 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152257 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8mkw\" (UniqueName: \"kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152307 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.152430 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data\") pod \"2552e567-6df4-445f-9bae-f4718e5c1dd6\" (UID: \"2552e567-6df4-445f-9bae-f4718e5c1dd6\") " Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.153148 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2552e567-6df4-445f-9bae-f4718e5c1dd6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.171825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts" (OuterVolumeSpecName: "scripts") pod 
"2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.182331 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw" (OuterVolumeSpecName: "kube-api-access-c8mkw") pod "2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "kube-api-access-c8mkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.182902 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.247592 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.254836 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.255083 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8mkw\" (UniqueName: \"kubernetes.io/projected/2552e567-6df4-445f-9bae-f4718e5c1dd6-kube-api-access-c8mkw\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.255096 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.255105 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.321496 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data" (OuterVolumeSpecName: "config-data") pod "2552e567-6df4-445f-9bae-f4718e5c1dd6" (UID: "2552e567-6df4-445f-9bae-f4718e5c1dd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.356940 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2552e567-6df4-445f-9bae-f4718e5c1dd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.543155 4827 generic.go:334] "Generic (PLEG): container finished" podID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerID="33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd" exitCode=0 Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.543197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerDied","Data":"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd"} Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.543226 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2552e567-6df4-445f-9bae-f4718e5c1dd6","Type":"ContainerDied","Data":"3e2e6cc5014b6901d8969db68db6e3525989627bb2e1b22cf0785e5710bfea92"} Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.543244 4827 scope.go:117] "RemoveContainer" containerID="30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.543393 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.567526 4827 scope.go:117] "RemoveContainer" containerID="33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.614802 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.618723 4827 scope.go:117] "RemoveContainer" containerID="30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3" Jan 26 10:02:59 crc kubenswrapper[4827]: E0126 10:02:59.622830 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3\": container with ID starting with 30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3 not found: ID does not exist" containerID="30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.622862 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3"} err="failed to get container status \"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3\": rpc error: code = NotFound desc = could not find container \"30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3\": container with ID starting with 30e86a0b3bcb59c7d87af9e64e749466845431efcaff5f9d5d852bc11ed014f3 not found: ID does not exist" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.622881 4827 scope.go:117] "RemoveContainer" containerID="33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd" Jan 26 10:02:59 crc kubenswrapper[4827]: E0126 10:02:59.623729 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd\": container with ID starting with 33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd not found: ID does not exist" containerID="33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.623768 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd"} err="failed to get container status \"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd\": rpc error: code = NotFound desc = could not find container \"33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd\": container with ID starting with 33d334e52c4af9d0ffce2a081b9096f4582166974c79907d13f9642ea019bdbd not found: ID does not exist" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.644707 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.675612 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:59 crc kubenswrapper[4827]: E0126 10:02:59.676198 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="probe" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.676474 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="probe" Jan 26 10:02:59 crc kubenswrapper[4827]: E0126 10:02:59.676726 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="manila-scheduler" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.676801 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="manila-scheduler" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 
10:02:59.677117 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="probe" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.677223 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" containerName="manila-scheduler" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.678836 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.682211 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.684975 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.718930 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2552e567-6df4-445f-9bae-f4718e5c1dd6" path="/var/lib/kubelet/pods/2552e567-6df4-445f-9bae-f4718e5c1dd6/volumes" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.868983 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4d9dc34-401b-43d6-97a0-c628eb57f517-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.869318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-scripts\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.869442 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.869538 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.869667 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.869779 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsbc\" (UniqueName: \"kubernetes.io/projected/d4d9dc34-401b-43d6-97a0-c628eb57f517-kube-api-access-vxsbc\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972547 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vxsbc\" (UniqueName: \"kubernetes.io/projected/d4d9dc34-401b-43d6-97a0-c628eb57f517-kube-api-access-vxsbc\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972852 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4d9dc34-401b-43d6-97a0-c628eb57f517-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972880 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-scripts\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972931 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.972976 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.973036 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4d9dc34-401b-43d6-97a0-c628eb57f517-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.978600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.981311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-config-data\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.982120 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-scripts\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.994380 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9dc34-401b-43d6-97a0-c628eb57f517-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:02:59 crc kubenswrapper[4827]: I0126 10:02:59.997813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsbc\" (UniqueName: \"kubernetes.io/projected/d4d9dc34-401b-43d6-97a0-c628eb57f517-kube-api-access-vxsbc\") pod \"manila-scheduler-0\" (UID: \"d4d9dc34-401b-43d6-97a0-c628eb57f517\") " pod="openstack/manila-scheduler-0" Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.001562 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.553658 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerStarted","Data":"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042"} Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.563786 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788b1f32-de2c-4281-902f-63df02b00cd8","Type":"ContainerStarted","Data":"7d7e62d83175cde777afb1241ff2709375ea5954ddaade4c0f6437193328c5d4"} Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.564180 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.674883 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 10:03:00 crc kubenswrapper[4827]: W0126 10:03:00.677854 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d9dc34_401b_43d6_97a0_c628eb57f517.slice/crio-d82af907b0ecb813c876e1d5c88b27d429b52cbd54989ae315f46f51eae57b3f WatchSource:0}: Error finding container d82af907b0ecb813c876e1d5c88b27d429b52cbd54989ae315f46f51eae57b3f: Status 404 returned error can't find the container with id d82af907b0ecb813c876e1d5c88b27d429b52cbd54989ae315f46f51eae57b3f Jan 26 10:03:00 crc kubenswrapper[4827]: I0126 10:03:00.885317 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.846713753 podStartE2EDuration="9.885300871s" podCreationTimestamp="2026-01-26 10:02:51 +0000 UTC" firstStartedPulling="2026-01-26 10:02:52.389493897 +0000 UTC m=+3401.038165716" lastFinishedPulling="2026-01-26 10:02:59.428081015 +0000 UTC m=+3408.076752834" 
observedRunningTime="2026-01-26 10:03:00.884036457 +0000 UTC m=+3409.532708276" watchObservedRunningTime="2026-01-26 10:03:00.885300871 +0000 UTC m=+3409.533972690" Jan 26 10:03:01 crc kubenswrapper[4827]: I0126 10:03:01.572732 4827 generic.go:334] "Generic (PLEG): container finished" podID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerID="fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042" exitCode=0 Jan 26 10:03:01 crc kubenswrapper[4827]: I0126 10:03:01.573025 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerDied","Data":"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042"} Jan 26 10:03:01 crc kubenswrapper[4827]: I0126 10:03:01.581052 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4d9dc34-401b-43d6-97a0-c628eb57f517","Type":"ContainerStarted","Data":"9d4b63122b47ab76dc61f23554d20210e62f10708a9821db584f04e5d4cfbb7a"} Jan 26 10:03:01 crc kubenswrapper[4827]: I0126 10:03:01.581147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4d9dc34-401b-43d6-97a0-c628eb57f517","Type":"ContainerStarted","Data":"d82af907b0ecb813c876e1d5c88b27d429b52cbd54989ae315f46f51eae57b3f"} Jan 26 10:03:02 crc kubenswrapper[4827]: I0126 10:03:02.593167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerStarted","Data":"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4"} Jan 26 10:03:02 crc kubenswrapper[4827]: I0126 10:03:02.595903 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4d9dc34-401b-43d6-97a0-c628eb57f517","Type":"ContainerStarted","Data":"a5dd3cb6e1da6e437ab862df955b25690454030f6d11aa4c4c91ce67ec7062fe"} Jan 26 10:03:02 
crc kubenswrapper[4827]: I0126 10:03:02.614204 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7mzph" podStartSLOduration=4.003588583 podStartE2EDuration="7.614190375s" podCreationTimestamp="2026-01-26 10:02:55 +0000 UTC" firstStartedPulling="2026-01-26 10:02:58.516396044 +0000 UTC m=+3407.165067863" lastFinishedPulling="2026-01-26 10:03:02.126997846 +0000 UTC m=+3410.775669655" observedRunningTime="2026-01-26 10:03:02.611564423 +0000 UTC m=+3411.260236242" watchObservedRunningTime="2026-01-26 10:03:02.614190375 +0000 UTC m=+3411.262862194" Jan 26 10:03:02 crc kubenswrapper[4827]: I0126 10:03:02.641078 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.641061124 podStartE2EDuration="3.641061124s" podCreationTimestamp="2026-01-26 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:03:02.636280942 +0000 UTC m=+3411.284952761" watchObservedRunningTime="2026-01-26 10:03:02.641061124 +0000 UTC m=+3411.289732943" Jan 26 10:03:03 crc kubenswrapper[4827]: I0126 10:03:03.196565 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.497038 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.497668 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.545001 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.595580 4827 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.642022 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.642847 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="manila-share" containerID="cri-o://9954dc7d592f9f81ed602f5c3abe57c43fa6ef4189ed1a3fa68a08c3e8805c6e" gracePeriod=30 Jan 26 10:03:05 crc kubenswrapper[4827]: I0126 10:03:05.642891 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="probe" containerID="cri-o://745055165f33589488884e1123f7a6af631eae328977347a1dbffa53fa7f842f" gracePeriod=30 Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.019561 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.021324 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.116093 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.125606 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.125928 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.126062 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xmr\" (UniqueName: \"kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.197963 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dfbfd7c96-88kv4" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.231540 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.231679 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xmr\" (UniqueName: \"kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.231794 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.231996 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.232238 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.249398 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98xmr\" (UniqueName: \"kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr\") pod \"redhat-marketplace-952j6\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.338189 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.657248 4827 generic.go:334] "Generic (PLEG): container finished" podID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerID="745055165f33589488884e1123f7a6af631eae328977347a1dbffa53fa7f842f" exitCode=0 Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.657509 4827 generic.go:334] "Generic (PLEG): container finished" podID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerID="9954dc7d592f9f81ed602f5c3abe57c43fa6ef4189ed1a3fa68a08c3e8805c6e" exitCode=1 Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.657540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerDied","Data":"745055165f33589488884e1123f7a6af631eae328977347a1dbffa53fa7f842f"} Jan 26 10:03:06 crc kubenswrapper[4827]: I0126 10:03:06.657577 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerDied","Data":"9954dc7d592f9f81ed602f5c3abe57c43fa6ef4189ed1a3fa68a08c3e8805c6e"} Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.031699 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:07 crc kubenswrapper[4827]: W0126 10:03:07.036163 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f905e7_2aac_46d0_b0ab_70d44f0ceb09.slice/crio-1c2838c93205d8d83759ac7e9c917fe97e293f52b7e6d5ac85bcc6dfe4036321 WatchSource:0}: Error finding container 1c2838c93205d8d83759ac7e9c917fe97e293f52b7e6d5ac85bcc6dfe4036321: Status 404 returned error can't find the container with id 1c2838c93205d8d83759ac7e9c917fe97e293f52b7e6d5ac85bcc6dfe4036321 Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.042814 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148051 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148349 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148490 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvht5\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: 
\"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148582 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148673 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148769 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.148883 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph\") pod \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\" (UID: \"77321af3-9f6d-4f0f-a89e-b6dba5d0280d\") " Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.150700 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.150711 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.154297 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.155505 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph" (OuterVolumeSpecName: "ceph") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.157312 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts" (OuterVolumeSpecName: "scripts") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.163527 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5" (OuterVolumeSpecName: "kube-api-access-qvht5") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "kube-api-access-qvht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.240917 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251188 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251223 4827 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251238 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251252 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data-custom\") on node \"crc\" DevicePath 
\"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251263 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251275 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvht5\" (UniqueName: \"kubernetes.io/projected/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-kube-api-access-qvht5\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.251286 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.306342 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data" (OuterVolumeSpecName: "config-data") pod "77321af3-9f6d-4f0f-a89e-b6dba5d0280d" (UID: "77321af3-9f6d-4f0f-a89e-b6dba5d0280d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.352460 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77321af3-9f6d-4f0f-a89e-b6dba5d0280d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.666782 4827 generic.go:334] "Generic (PLEG): container finished" podID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerID="4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a" exitCode=0 Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.668017 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerDied","Data":"4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a"} Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.668046 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerStarted","Data":"1c2838c93205d8d83759ac7e9c917fe97e293f52b7e6d5ac85bcc6dfe4036321"} Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.672122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"77321af3-9f6d-4f0f-a89e-b6dba5d0280d","Type":"ContainerDied","Data":"36e8353ad42f5d74d337fb03e146cd96daf7b32a9308c94805cdce49be79028a"} Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.672165 4827 scope.go:117] "RemoveContainer" containerID="745055165f33589488884e1123f7a6af631eae328977347a1dbffa53fa7f842f" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.672278 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.693970 4827 scope.go:117] "RemoveContainer" containerID="9954dc7d592f9f81ed602f5c3abe57c43fa6ef4189ed1a3fa68a08c3e8805c6e" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.730220 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.743760 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.754385 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:07 crc kubenswrapper[4827]: E0126 10:03:07.754860 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="probe" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.754882 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="probe" Jan 26 10:03:07 crc kubenswrapper[4827]: E0126 10:03:07.754911 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="manila-share" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.754920 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="manila-share" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.755115 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="manila-share" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.755149 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" containerName="probe" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.756155 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.758557 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.769411 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.862560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.862631 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-scripts\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.862676 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.862703 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-ceph\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.862970 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.863023 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kfk\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-kube-api-access-c5kfk\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.863061 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.863232 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.965415 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.965498 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-scripts\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.965533 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.965565 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-ceph\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.966715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.966765 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kfk\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-kube-api-access-c5kfk\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.966796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.966900 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.967459 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.967558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.972355 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.973766 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data\") pod \"manila-share-share1-0\" (UID: 
\"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.974342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-scripts\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.975406 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.991939 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kfk\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-kube-api-access-c5kfk\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:07 crc kubenswrapper[4827]: I0126 10:03:07.995124 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0-ceph\") pod \"manila-share-share1-0\" (UID: \"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0\") " pod="openstack/manila-share-share1-0" Jan 26 10:03:08 crc kubenswrapper[4827]: I0126 10:03:08.072577 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 10:03:08 crc kubenswrapper[4827]: I0126 10:03:08.681704 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerStarted","Data":"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e"} Jan 26 10:03:08 crc kubenswrapper[4827]: I0126 10:03:08.737597 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.695814 4827 generic.go:334] "Generic (PLEG): container finished" podID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerID="324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e" exitCode=0 Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.696002 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerDied","Data":"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e"} Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.698537 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0","Type":"ContainerStarted","Data":"1f991a9ce14a4f44ef160c06cadbffef53b72450902c6aca92d9f519fc69b224"} Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.698595 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0","Type":"ContainerStarted","Data":"84ef8487c2a8d11bd4e8f62ea38c0c19de6a875238ccb5f6ac817603ceb7db2c"} Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.698609 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0","Type":"ContainerStarted","Data":"53c995be31d204ae2a7b7d92f00fa213462fc384582592f49a0d239f504f7082"} Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.713686 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77321af3-9f6d-4f0f-a89e-b6dba5d0280d" path="/var/lib/kubelet/pods/77321af3-9f6d-4f0f-a89e-b6dba5d0280d/volumes" Jan 26 10:03:09 crc kubenswrapper[4827]: I0126 10:03:09.768836 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.768811227 podStartE2EDuration="2.768811227s" podCreationTimestamp="2026-01-26 10:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:03:09.762509834 +0000 UTC m=+3418.411181653" watchObservedRunningTime="2026-01-26 10:03:09.768811227 +0000 UTC m=+3418.417483046" Jan 26 10:03:10 crc kubenswrapper[4827]: I0126 10:03:10.002069 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 26 10:03:10 crc kubenswrapper[4827]: I0126 10:03:10.708135 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerStarted","Data":"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69"} Jan 26 10:03:10 crc kubenswrapper[4827]: I0126 10:03:10.742618 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-952j6" podStartSLOduration=3.276342031 podStartE2EDuration="5.742599188s" podCreationTimestamp="2026-01-26 10:03:05 +0000 UTC" firstStartedPulling="2026-01-26 10:03:07.668970035 +0000 UTC m=+3416.317641854" lastFinishedPulling="2026-01-26 10:03:10.135227192 +0000 UTC m=+3418.783899011" observedRunningTime="2026-01-26 10:03:10.735814422 +0000 UTC 
m=+3419.384486251" watchObservedRunningTime="2026-01-26 10:03:10.742599188 +0000 UTC m=+3419.391271007" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.280788 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.281206 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.571025 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649190 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649267 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649317 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs\") pod 
\"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649372 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649557 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649601 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd8vf\" (UniqueName: \"kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.649656 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts\") pod \"78869a93-5b51-40d0-9366-a8bada4c394b\" (UID: \"78869a93-5b51-40d0-9366-a8bada4c394b\") " Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.650568 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs" (OuterVolumeSpecName: "logs") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.660894 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.678999 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf" (OuterVolumeSpecName: "kube-api-access-hd8vf") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "kube-api-access-hd8vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.685744 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts" (OuterVolumeSpecName: "scripts") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.686086 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data" (OuterVolumeSpecName: "config-data") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.700363 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.711253 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "78869a93-5b51-40d0-9366-a8bada4c394b" (UID: "78869a93-5b51-40d0-9366-a8bada4c394b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.731969 4827 generic.go:334] "Generic (PLEG): container finished" podID="78869a93-5b51-40d0-9366-a8bada4c394b" containerID="72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425" exitCode=137 Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.732008 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerDied","Data":"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425"} Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.732036 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfbfd7c96-88kv4" event={"ID":"78869a93-5b51-40d0-9366-a8bada4c394b","Type":"ContainerDied","Data":"9b6072dafba467ccc5ffd43b5856090b706822b820dddd244a8683a144da7cd1"} Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.732056 4827 scope.go:117] "RemoveContainer" 
containerID="72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.732250 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dfbfd7c96-88kv4" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752219 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752254 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752265 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752296 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78869a93-5b51-40d0-9366-a8bada4c394b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752306 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78869a93-5b51-40d0-9366-a8bada4c394b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752315 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd8vf\" (UniqueName: \"kubernetes.io/projected/78869a93-5b51-40d0-9366-a8bada4c394b-kube-api-access-hd8vf\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.752323 4827 reconciler_common.go:293] "Volume detached for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78869a93-5b51-40d0-9366-a8bada4c394b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.771571 4827 scope.go:117] "RemoveContainer" containerID="8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.798328 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.811435 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dfbfd7c96-88kv4"] Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.938439 4827 scope.go:117] "RemoveContainer" containerID="72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425" Jan 26 10:03:12 crc kubenswrapper[4827]: E0126 10:03:12.939424 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425\": container with ID starting with 72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425 not found: ID does not exist" containerID="72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.939496 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425"} err="failed to get container status \"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425\": rpc error: code = NotFound desc = could not find container \"72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425\": container with ID starting with 72d33a2855a86ee46e04886c31849016a80f16d9bbb7d73b754c80d4b767e425 not found: ID does not exist" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.939538 4827 scope.go:117] "RemoveContainer" 
containerID="8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b" Jan 26 10:03:12 crc kubenswrapper[4827]: E0126 10:03:12.940892 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b\": container with ID starting with 8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b not found: ID does not exist" containerID="8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b" Jan 26 10:03:12 crc kubenswrapper[4827]: I0126 10:03:12.940942 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b"} err="failed to get container status \"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b\": rpc error: code = NotFound desc = could not find container \"8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b\": container with ID starting with 8d5f06d738f8647cd2ba32ffa59abe7605850ad6c4efae419c5819f28dcde12b not found: ID does not exist" Jan 26 10:03:13 crc kubenswrapper[4827]: I0126 10:03:13.723985 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" path="/var/lib/kubelet/pods/78869a93-5b51-40d0-9366-a8bada4c394b/volumes" Jan 26 10:03:15 crc kubenswrapper[4827]: I0126 10:03:15.555346 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:15 crc kubenswrapper[4827]: I0126 10:03:15.606044 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:03:15 crc kubenswrapper[4827]: I0126 10:03:15.769049 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7mzph" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" 
containerName="registry-server" containerID="cri-o://979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4" gracePeriod=2 Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.224993 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.318579 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5x6f\" (UniqueName: \"kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f\") pod \"f196cd42-2ae0-461f-b726-44aa244c1b03\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.318665 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities\") pod \"f196cd42-2ae0-461f-b726-44aa244c1b03\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.318738 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content\") pod \"f196cd42-2ae0-461f-b726-44aa244c1b03\" (UID: \"f196cd42-2ae0-461f-b726-44aa244c1b03\") " Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.319782 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities" (OuterVolumeSpecName: "utilities") pod "f196cd42-2ae0-461f-b726-44aa244c1b03" (UID: "f196cd42-2ae0-461f-b726-44aa244c1b03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.325900 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f" (OuterVolumeSpecName: "kube-api-access-j5x6f") pod "f196cd42-2ae0-461f-b726-44aa244c1b03" (UID: "f196cd42-2ae0-461f-b726-44aa244c1b03"). InnerVolumeSpecName "kube-api-access-j5x6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.339146 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.339188 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.368277 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f196cd42-2ae0-461f-b726-44aa244c1b03" (UID: "f196cd42-2ae0-461f-b726-44aa244c1b03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.390532 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.420991 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5x6f\" (UniqueName: \"kubernetes.io/projected/f196cd42-2ae0-461f-b726-44aa244c1b03-kube-api-access-j5x6f\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.421218 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.421279 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f196cd42-2ae0-461f-b726-44aa244c1b03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.778106 4827 generic.go:334] "Generic (PLEG): container finished" podID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerID="979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4" exitCode=0 Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.778155 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerDied","Data":"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4"} Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.778987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mzph" event={"ID":"f196cd42-2ae0-461f-b726-44aa244c1b03","Type":"ContainerDied","Data":"51769922d9058ca24aa209545f8f568d410d90586309c9424acc28fc292c2cd6"} Jan 26 10:03:16 crc kubenswrapper[4827]: 
I0126 10:03:16.779020 4827 scope.go:117] "RemoveContainer" containerID="979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.778192 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mzph" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.801450 4827 scope.go:117] "RemoveContainer" containerID="fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.831869 4827 scope.go:117] "RemoveContainer" containerID="e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.832033 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.842577 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7mzph"] Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.855265 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.887911 4827 scope.go:117] "RemoveContainer" containerID="979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4" Jan 26 10:03:16 crc kubenswrapper[4827]: E0126 10:03:16.888366 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4\": container with ID starting with 979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4 not found: ID does not exist" containerID="979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.888401 4827 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4"} err="failed to get container status \"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4\": rpc error: code = NotFound desc = could not find container \"979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4\": container with ID starting with 979f71ce3692910951d3fe195fdb9bef2c49ed6c81962294ab441cdfb1e939f4 not found: ID does not exist" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.888425 4827 scope.go:117] "RemoveContainer" containerID="fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042" Jan 26 10:03:16 crc kubenswrapper[4827]: E0126 10:03:16.888924 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042\": container with ID starting with fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042 not found: ID does not exist" containerID="fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.888963 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042"} err="failed to get container status \"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042\": rpc error: code = NotFound desc = could not find container \"fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042\": container with ID starting with fa8192abfaa740af8621d26ba19deefafccd49899525948e09dfacf5cb0f1042 not found: ID does not exist" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.888980 4827 scope.go:117] "RemoveContainer" containerID="e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347" Jan 26 10:03:16 crc kubenswrapper[4827]: E0126 10:03:16.889287 4827 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347\": container with ID starting with e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347 not found: ID does not exist" containerID="e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347" Jan 26 10:03:16 crc kubenswrapper[4827]: I0126 10:03:16.889323 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347"} err="failed to get container status \"e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347\": rpc error: code = NotFound desc = could not find container \"e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347\": container with ID starting with e2c3214a8ca4a5a27e0eeb4feaa96b187324ba5bfcd31a4d6dbf54fc815d2347 not found: ID does not exist" Jan 26 10:03:17 crc kubenswrapper[4827]: I0126 10:03:17.734824 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" path="/var/lib/kubelet/pods/f196cd42-2ae0-461f-b726-44aa244c1b03/volumes" Jan 26 10:03:18 crc kubenswrapper[4827]: I0126 10:03:18.073989 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 26 10:03:18 crc kubenswrapper[4827]: I0126 10:03:18.799487 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:18 crc kubenswrapper[4827]: I0126 10:03:18.806782 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-952j6" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="registry-server" containerID="cri-o://64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69" gracePeriod=2 Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.233935 4827 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.278989 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98xmr\" (UniqueName: \"kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr\") pod \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.279066 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities\") pod \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.279154 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content\") pod \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\" (UID: \"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09\") " Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.285325 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities" (OuterVolumeSpecName: "utilities") pod "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" (UID: "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.304918 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr" (OuterVolumeSpecName: "kube-api-access-98xmr") pod "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" (UID: "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09"). 
InnerVolumeSpecName "kube-api-access-98xmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.319567 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" (UID: "a6f905e7-2aac-46d0-b0ab-70d44f0ceb09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.380916 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98xmr\" (UniqueName: \"kubernetes.io/projected/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-kube-api-access-98xmr\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.380950 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.380959 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.818681 4827 generic.go:334] "Generic (PLEG): container finished" podID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerID="64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69" exitCode=0 Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.818729 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerDied","Data":"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69"} Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.818757 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-952j6" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.818792 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-952j6" event={"ID":"a6f905e7-2aac-46d0-b0ab-70d44f0ceb09","Type":"ContainerDied","Data":"1c2838c93205d8d83759ac7e9c917fe97e293f52b7e6d5ac85bcc6dfe4036321"} Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.818825 4827 scope.go:117] "RemoveContainer" containerID="64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.852998 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.853254 4827 scope.go:117] "RemoveContainer" containerID="324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.862245 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-952j6"] Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.879270 4827 scope.go:117] "RemoveContainer" containerID="4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.917260 4827 scope.go:117] "RemoveContainer" containerID="64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69" Jan 26 10:03:19 crc kubenswrapper[4827]: E0126 10:03:19.918061 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69\": container with ID starting with 64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69 not found: ID does not exist" containerID="64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69" Jan 26 10:03:19 crc 
kubenswrapper[4827]: I0126 10:03:19.918096 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69"} err="failed to get container status \"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69\": rpc error: code = NotFound desc = could not find container \"64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69\": container with ID starting with 64439c12ac687483c2d2e9787a4481aac111e0e78a00634f35a96df3b28c4c69 not found: ID does not exist" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.918117 4827 scope.go:117] "RemoveContainer" containerID="324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e" Jan 26 10:03:19 crc kubenswrapper[4827]: E0126 10:03:19.918503 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e\": container with ID starting with 324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e not found: ID does not exist" containerID="324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.918532 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e"} err="failed to get container status \"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e\": rpc error: code = NotFound desc = could not find container \"324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e\": container with ID starting with 324ece8704262147bf63c53dbff7aabb135055bc11f4431939cce54fa885157e not found: ID does not exist" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.918553 4827 scope.go:117] "RemoveContainer" containerID="4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a" Jan 26 
10:03:19 crc kubenswrapper[4827]: E0126 10:03:19.918891 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a\": container with ID starting with 4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a not found: ID does not exist" containerID="4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a" Jan 26 10:03:19 crc kubenswrapper[4827]: I0126 10:03:19.918936 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a"} err="failed to get container status \"4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a\": rpc error: code = NotFound desc = could not find container \"4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a\": container with ID starting with 4aab7911d4766e8f4f6b2de7cd197f2757c31ee306ff8173e181cadec2a4589a not found: ID does not exist" Jan 26 10:03:21 crc kubenswrapper[4827]: I0126 10:03:21.621852 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 26 10:03:21 crc kubenswrapper[4827]: I0126 10:03:21.737208 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" path="/var/lib/kubelet/pods/a6f905e7-2aac-46d0-b0ab-70d44f0ceb09/volumes" Jan 26 10:03:21 crc kubenswrapper[4827]: I0126 10:03:21.897911 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 10:03:29 crc kubenswrapper[4827]: I0126 10:03:29.716372 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 26 10:03:42 crc kubenswrapper[4827]: I0126 10:03:42.269149 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:03:42 crc kubenswrapper[4827]: I0126 10:03:42.270112 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:03:42 crc kubenswrapper[4827]: I0126 10:03:42.270643 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:03:42 crc kubenswrapper[4827]: I0126 10:03:42.271994 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:03:42 crc kubenswrapper[4827]: I0126 10:03:42.272100 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12" gracePeriod=600 Jan 26 10:03:42 crc kubenswrapper[4827]: E0126 10:03:42.506994 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef39dc20_499c_4665_9555_481361ffe06d.slice/crio-conmon-4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef39dc20_499c_4665_9555_481361ffe06d.slice/crio-4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12.scope\": RecentStats: unable to find data in memory cache]" Jan 26 10:03:43 crc kubenswrapper[4827]: I0126 10:03:43.061420 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12" exitCode=0 Jan 26 10:03:43 crc kubenswrapper[4827]: I0126 10:03:43.061921 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12"} Jan 26 10:03:43 crc kubenswrapper[4827]: I0126 10:03:43.061952 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"} Jan 26 10:03:43 crc kubenswrapper[4827]: I0126 10:03:43.061972 4827 scope.go:117] "RemoveContainer" containerID="183ceb170e928d637a0fef4208b2b9551fff137cae783b3f70a83d845a1670ef" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.164962 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166208 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="extract-utilities" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166236 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="extract-utilities" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 
10:04:38.166264 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="extract-content" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166280 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="extract-content" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166296 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166307 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166329 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="extract-content" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166340 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="extract-content" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166359 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166369 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166388 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon-log" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166399 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon-log" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166417 4827 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="extract-utilities" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166429 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="extract-utilities" Jan 26 10:04:38 crc kubenswrapper[4827]: E0126 10:04:38.166444 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166454 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166774 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f196cd42-2ae0-461f-b726-44aa244c1b03" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166807 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon-log" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166825 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="78869a93-5b51-40d0-9366-a8bada4c394b" containerName="horizon" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.166846 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f905e7-2aac-46d0-b0ab-70d44f0ceb09" containerName="registry-server" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.167843 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.210935 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.212914 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.213049 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.214891 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-znm65" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.220499 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.305949 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306371 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306428 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306501 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306609 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306675 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306705 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.306737 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.409745 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.409828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.409879 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.409967 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" 
(UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410089 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410184 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410300 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 
10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410467 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410628 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.410872 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.412186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.412521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.420185 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.427401 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.429391 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.435548 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.456012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " pod="openstack/tempest-tests-tempest" Jan 26 10:04:38 crc kubenswrapper[4827]: I0126 10:04:38.538203 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 10:04:39 crc kubenswrapper[4827]: I0126 10:04:39.064185 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 10:04:39 crc kubenswrapper[4827]: I0126 10:04:39.081050 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 10:04:39 crc kubenswrapper[4827]: I0126 10:04:39.643114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3afb0c8-7da0-4f91-a689-921ef566e7a2","Type":"ContainerStarted","Data":"624c761f979e1d068732e32ee9d613da34943c9d9d920d1fae85786ce414a144"} Jan 26 10:05:17 crc kubenswrapper[4827]: E0126 10:05:17.298608 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 26 10:05:17 crc kubenswrapper[4827]: E0126 10:05:17.300235 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nscb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a3afb0c8-7da0-4f91-a689-921ef566e7a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 10:05:17 crc kubenswrapper[4827]: E0126 10:05:17.301501 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" Jan 26 10:05:18 crc kubenswrapper[4827]: E0126 10:05:18.029295 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" Jan 26 10:05:29 crc 
kubenswrapper[4827]: I0126 10:05:29.165567 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 10:05:31 crc kubenswrapper[4827]: I0126 10:05:31.189198 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3afb0c8-7da0-4f91-a689-921ef566e7a2","Type":"ContainerStarted","Data":"96717f14d420003389721918ba734702443f03113a1c5e574b04b35348a34fd8"} Jan 26 10:05:42 crc kubenswrapper[4827]: I0126 10:05:42.268777 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:05:42 crc kubenswrapper[4827]: I0126 10:05:42.269416 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:06:12 crc kubenswrapper[4827]: I0126 10:06:12.269060 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:06:12 crc kubenswrapper[4827]: I0126 10:06:12.270423 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:06:42 crc 
kubenswrapper[4827]: I0126 10:06:42.269892 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.272427 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.272814 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.274196 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.274466 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" gracePeriod=600 Jan 26 10:06:42 crc kubenswrapper[4827]: E0126 10:06:42.397873 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.844771 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" exitCode=0 Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.844827 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"} Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.845106 4827 scope.go:117] "RemoveContainer" containerID="4fe70288b72b5e69e1e57c4003c07a8a5e4f3823ea10365a25b2ddb5b6860b12" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.845959 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:06:42 crc kubenswrapper[4827]: E0126 10:06:42.846270 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:06:42 crc kubenswrapper[4827]: I0126 10:06:42.871994 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=75.792230122 podStartE2EDuration="2m5.871975632s" podCreationTimestamp="2026-01-26 10:04:37 +0000 UTC" 
firstStartedPulling="2026-01-26 10:04:39.080792196 +0000 UTC m=+3507.729464025" lastFinishedPulling="2026-01-26 10:05:29.160537706 +0000 UTC m=+3557.809209535" observedRunningTime="2026-01-26 10:05:31.216954406 +0000 UTC m=+3559.865626235" watchObservedRunningTime="2026-01-26 10:06:42.871975632 +0000 UTC m=+3631.520647451" Jan 26 10:06:56 crc kubenswrapper[4827]: I0126 10:06:56.703813 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:06:56 crc kubenswrapper[4827]: E0126 10:06:56.704789 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:07:07 crc kubenswrapper[4827]: I0126 10:07:07.703452 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:07:07 crc kubenswrapper[4827]: E0126 10:07:07.704544 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:07:18 crc kubenswrapper[4827]: I0126 10:07:18.703282 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:07:18 crc kubenswrapper[4827]: E0126 10:07:18.705100 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:07:29 crc kubenswrapper[4827]: I0126 10:07:29.703412 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:07:29 crc kubenswrapper[4827]: E0126 10:07:29.704334 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:07:41 crc kubenswrapper[4827]: I0126 10:07:41.710342 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:07:41 crc kubenswrapper[4827]: E0126 10:07:41.711232 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:07:53 crc kubenswrapper[4827]: I0126 10:07:53.702772 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:07:53 crc kubenswrapper[4827]: E0126 10:07:53.703555 4827 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:08:06 crc kubenswrapper[4827]: I0126 10:08:06.703902 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:08:06 crc kubenswrapper[4827]: E0126 10:08:06.704569 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:08:19 crc kubenswrapper[4827]: I0126 10:08:19.703041 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:08:19 crc kubenswrapper[4827]: E0126 10:08:19.704024 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.040294 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.042515 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.074014 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.147523 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.147677 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.147711 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxwk\" (UniqueName: \"kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.249791 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.249886 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.249921 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxwk\" (UniqueName: \"kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.250485 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.250677 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.273494 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxwk\" (UniqueName: \"kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk\") pod \"community-operators-2dhbf\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:23 crc kubenswrapper[4827]: I0126 10:08:23.362114 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:24 crc kubenswrapper[4827]: I0126 10:08:24.016674 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:24 crc kubenswrapper[4827]: I0126 10:08:24.865573 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerID="0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5" exitCode=0 Jan 26 10:08:24 crc kubenswrapper[4827]: I0126 10:08:24.865682 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerDied","Data":"0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5"} Jan 26 10:08:24 crc kubenswrapper[4827]: I0126 10:08:24.865957 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerStarted","Data":"131ecbc6a89342f2aecc5bbf659289720e1621bc90b7f3cef595476a2a4e36c2"} Jan 26 10:08:25 crc kubenswrapper[4827]: I0126 10:08:25.875760 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerStarted","Data":"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3"} Jan 26 10:08:26 crc kubenswrapper[4827]: I0126 10:08:26.902580 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerID="1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3" exitCode=0 Jan 26 10:08:26 crc kubenswrapper[4827]: I0126 10:08:26.902657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" 
event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerDied","Data":"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3"} Jan 26 10:08:29 crc kubenswrapper[4827]: I0126 10:08:29.927567 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerStarted","Data":"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c"} Jan 26 10:08:30 crc kubenswrapper[4827]: I0126 10:08:30.974670 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dhbf" podStartSLOduration=5.156341514 podStartE2EDuration="7.974625448s" podCreationTimestamp="2026-01-26 10:08:23 +0000 UTC" firstStartedPulling="2026-01-26 10:08:24.867099087 +0000 UTC m=+3733.515770906" lastFinishedPulling="2026-01-26 10:08:27.685383021 +0000 UTC m=+3736.334054840" observedRunningTime="2026-01-26 10:08:30.964913477 +0000 UTC m=+3739.613585296" watchObservedRunningTime="2026-01-26 10:08:30.974625448 +0000 UTC m=+3739.623297267" Jan 26 10:08:33 crc kubenswrapper[4827]: I0126 10:08:33.363799 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:33 crc kubenswrapper[4827]: I0126 10:08:33.364598 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:33 crc kubenswrapper[4827]: I0126 10:08:33.413405 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:34 crc kubenswrapper[4827]: I0126 10:08:34.702913 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:08:34 crc kubenswrapper[4827]: E0126 10:08:34.703719 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:08:43 crc kubenswrapper[4827]: I0126 10:08:43.414377 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:43 crc kubenswrapper[4827]: I0126 10:08:43.466324 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.084906 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dhbf" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="registry-server" containerID="cri-o://bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c" gracePeriod=2 Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.593356 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.699628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities\") pod \"7d8ee04c-7fb2-4e21-b7de-93278714912d\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.699858 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxwk\" (UniqueName: \"kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk\") pod \"7d8ee04c-7fb2-4e21-b7de-93278714912d\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.699981 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content\") pod \"7d8ee04c-7fb2-4e21-b7de-93278714912d\" (UID: \"7d8ee04c-7fb2-4e21-b7de-93278714912d\") " Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.700292 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities" (OuterVolumeSpecName: "utilities") pod "7d8ee04c-7fb2-4e21-b7de-93278714912d" (UID: "7d8ee04c-7fb2-4e21-b7de-93278714912d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.700593 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.715147 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk" (OuterVolumeSpecName: "kube-api-access-bhxwk") pod "7d8ee04c-7fb2-4e21-b7de-93278714912d" (UID: "7d8ee04c-7fb2-4e21-b7de-93278714912d"). InnerVolumeSpecName "kube-api-access-bhxwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.769873 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d8ee04c-7fb2-4e21-b7de-93278714912d" (UID: "7d8ee04c-7fb2-4e21-b7de-93278714912d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.801873 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxwk\" (UniqueName: \"kubernetes.io/projected/7d8ee04c-7fb2-4e21-b7de-93278714912d-kube-api-access-bhxwk\") on node \"crc\" DevicePath \"\"" Jan 26 10:08:44 crc kubenswrapper[4827]: I0126 10:08:44.802085 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d8ee04c-7fb2-4e21-b7de-93278714912d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.094384 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerID="bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c" exitCode=0 Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.094472 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dhbf" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.094470 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerDied","Data":"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c"} Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.095003 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dhbf" event={"ID":"7d8ee04c-7fb2-4e21-b7de-93278714912d","Type":"ContainerDied","Data":"131ecbc6a89342f2aecc5bbf659289720e1621bc90b7f3cef595476a2a4e36c2"} Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.095055 4827 scope.go:117] "RemoveContainer" containerID="bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.129987 4827 scope.go:117] "RemoveContainer" 
containerID="1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.162871 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.175586 4827 scope.go:117] "RemoveContainer" containerID="0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.181180 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dhbf"] Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.209604 4827 scope.go:117] "RemoveContainer" containerID="bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c" Jan 26 10:08:45 crc kubenswrapper[4827]: E0126 10:08:45.210181 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c\": container with ID starting with bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c not found: ID does not exist" containerID="bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.210259 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c"} err="failed to get container status \"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c\": rpc error: code = NotFound desc = could not find container \"bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c\": container with ID starting with bd963acbd359d157903374d9edd65df6a6dc9c4d6da07a262fa6e4791e3f0f6c not found: ID does not exist" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.210305 4827 scope.go:117] "RemoveContainer" 
containerID="1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3" Jan 26 10:08:45 crc kubenswrapper[4827]: E0126 10:08:45.211871 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3\": container with ID starting with 1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3 not found: ID does not exist" containerID="1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.211907 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3"} err="failed to get container status \"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3\": rpc error: code = NotFound desc = could not find container \"1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3\": container with ID starting with 1e1f8e9828ef2c94ed8674b13da600c958010c0a364cdfebb86bd307ad9e5bb3 not found: ID does not exist" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.211934 4827 scope.go:117] "RemoveContainer" containerID="0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5" Jan 26 10:08:45 crc kubenswrapper[4827]: E0126 10:08:45.212550 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5\": container with ID starting with 0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5 not found: ID does not exist" containerID="0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.212592 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5"} err="failed to get container status \"0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5\": rpc error: code = NotFound desc = could not find container \"0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5\": container with ID starting with 0811753f39763cad17ad3f1d1c66ba5f3c808011f2f573919d0b28a7a40a86e5 not found: ID does not exist" Jan 26 10:08:45 crc kubenswrapper[4827]: I0126 10:08:45.713925 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" path="/var/lib/kubelet/pods/7d8ee04c-7fb2-4e21-b7de-93278714912d/volumes" Jan 26 10:08:47 crc kubenswrapper[4827]: I0126 10:08:47.703546 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:08:47 crc kubenswrapper[4827]: E0126 10:08:47.704702 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:09:01 crc kubenswrapper[4827]: I0126 10:09:01.712245 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:09:01 crc kubenswrapper[4827]: E0126 10:09:01.713404 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:09:14 crc kubenswrapper[4827]: I0126 10:09:14.703279 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:09:14 crc kubenswrapper[4827]: E0126 10:09:14.704428 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:09:29 crc kubenswrapper[4827]: I0126 10:09:29.702653 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:09:29 crc kubenswrapper[4827]: E0126 10:09:29.703392 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:09:40 crc kubenswrapper[4827]: I0126 10:09:40.703704 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:09:40 crc kubenswrapper[4827]: E0126 10:09:40.704627 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:09:51 crc kubenswrapper[4827]: I0126 10:09:51.710227 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:09:51 crc kubenswrapper[4827]: E0126 10:09:51.711087 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:10:05 crc kubenswrapper[4827]: I0126 10:10:05.702837 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:10:05 crc kubenswrapper[4827]: E0126 10:10:05.703577 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:10:19 crc kubenswrapper[4827]: I0126 10:10:19.704033 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:10:19 crc kubenswrapper[4827]: E0126 10:10:19.705034 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:10:32 crc kubenswrapper[4827]: I0126 10:10:32.703462 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:10:32 crc kubenswrapper[4827]: E0126 10:10:32.704573 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:10:46 crc kubenswrapper[4827]: I0126 10:10:46.704041 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:10:46 crc kubenswrapper[4827]: E0126 10:10:46.705538 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.146338 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:10:53 crc kubenswrapper[4827]: E0126 10:10:53.148560 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="extract-utilities"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.148576 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="extract-utilities"
Jan 26 10:10:53 crc kubenswrapper[4827]: E0126 10:10:53.148600 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="extract-content"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.148608 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="extract-content"
Jan 26 10:10:53 crc kubenswrapper[4827]: E0126 10:10:53.148662 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="registry-server"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.148676 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="registry-server"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.148877 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8ee04c-7fb2-4e21-b7de-93278714912d" containerName="registry-server"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.150919 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.167472 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.250233 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.250302 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.250443 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbpx\" (UniqueName: \"kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.351896 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.351979 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.352071 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkbpx\" (UniqueName: \"kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.352484 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.352516 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.371027 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkbpx\" (UniqueName: \"kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx\") pod \"redhat-operators-x5qrq\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") " pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:53 crc kubenswrapper[4827]: I0126 10:10:53.475214 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:10:54 crc kubenswrapper[4827]: I0126 10:10:54.018308 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:10:54 crc kubenswrapper[4827]: W0126 10:10:54.021598 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555f6eb0_a321_45bf_8aff_0afc97a22060.slice/crio-39b84b716532eb862d2d8d85d333db1278befec6f6d3f53976879b45a4ada658 WatchSource:0}: Error finding container 39b84b716532eb862d2d8d85d333db1278befec6f6d3f53976879b45a4ada658: Status 404 returned error can't find the container with id 39b84b716532eb862d2d8d85d333db1278befec6f6d3f53976879b45a4ada658
Jan 26 10:10:54 crc kubenswrapper[4827]: I0126 10:10:54.386262 4827 generic.go:334] "Generic (PLEG): container finished" podID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerID="c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b" exitCode=0
Jan 26 10:10:54 crc kubenswrapper[4827]: I0126 10:10:54.386323 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerDied","Data":"c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b"}
Jan 26 10:10:54 crc kubenswrapper[4827]: I0126 10:10:54.386607 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerStarted","Data":"39b84b716532eb862d2d8d85d333db1278befec6f6d3f53976879b45a4ada658"}
Jan 26 10:10:54 crc kubenswrapper[4827]: I0126 10:10:54.388537 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 10:10:55 crc kubenswrapper[4827]: I0126 10:10:55.395301 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerStarted","Data":"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"}
Jan 26 10:10:59 crc kubenswrapper[4827]: I0126 10:10:59.428779 4827 generic.go:334] "Generic (PLEG): container finished" podID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerID="9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b" exitCode=0
Jan 26 10:10:59 crc kubenswrapper[4827]: I0126 10:10:59.428855 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerDied","Data":"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"}
Jan 26 10:11:00 crc kubenswrapper[4827]: I0126 10:11:00.443411 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerStarted","Data":"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"}
Jan 26 10:11:00 crc kubenswrapper[4827]: I0126 10:11:00.478339 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x5qrq" podStartSLOduration=2.027749217 podStartE2EDuration="7.47832107s" podCreationTimestamp="2026-01-26 10:10:53 +0000 UTC" firstStartedPulling="2026-01-26 10:10:54.388315011 +0000 UTC m=+3883.036986830" lastFinishedPulling="2026-01-26 10:10:59.838886864 +0000 UTC m=+3888.487558683" observedRunningTime="2026-01-26 10:11:00.475520714 +0000 UTC m=+3889.124192533" watchObservedRunningTime="2026-01-26 10:11:00.47832107 +0000 UTC m=+3889.126992879"
Jan 26 10:11:01 crc kubenswrapper[4827]: I0126 10:11:01.704591 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:11:01 crc kubenswrapper[4827]: E0126 10:11:01.705304 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:11:03 crc kubenswrapper[4827]: I0126 10:11:03.476331 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:03 crc kubenswrapper[4827]: I0126 10:11:03.476721 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:04 crc kubenswrapper[4827]: I0126 10:11:04.535172 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x5qrq" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server" probeResult="failure" output=<
Jan 26 10:11:04 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s
Jan 26 10:11:04 crc kubenswrapper[4827]: >
Jan 26 10:11:14 crc kubenswrapper[4827]: I0126 10:11:14.576727 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x5qrq" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server" probeResult="failure" output=<
Jan 26 10:11:14 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s
Jan 26 10:11:14 crc kubenswrapper[4827]: >
Jan 26 10:11:15 crc kubenswrapper[4827]: I0126 10:11:15.703489 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:11:15 crc kubenswrapper[4827]: E0126 10:11:15.703737 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:11:23 crc kubenswrapper[4827]: I0126 10:11:23.531224 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:23 crc kubenswrapper[4827]: I0126 10:11:23.577404 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:24 crc kubenswrapper[4827]: I0126 10:11:24.343898 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:11:24 crc kubenswrapper[4827]: I0126 10:11:24.663303 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x5qrq" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server" containerID="cri-o://f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c" gracePeriod=2
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.199030 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.331479 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities\") pod \"555f6eb0-a321-45bf-8aff-0afc97a22060\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") "
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.331662 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkbpx\" (UniqueName: \"kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx\") pod \"555f6eb0-a321-45bf-8aff-0afc97a22060\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") "
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.331770 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content\") pod \"555f6eb0-a321-45bf-8aff-0afc97a22060\" (UID: \"555f6eb0-a321-45bf-8aff-0afc97a22060\") "
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.332017 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities" (OuterVolumeSpecName: "utilities") pod "555f6eb0-a321-45bf-8aff-0afc97a22060" (UID: "555f6eb0-a321-45bf-8aff-0afc97a22060"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.332314 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.344697 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx" (OuterVolumeSpecName: "kube-api-access-lkbpx") pod "555f6eb0-a321-45bf-8aff-0afc97a22060" (UID: "555f6eb0-a321-45bf-8aff-0afc97a22060"). InnerVolumeSpecName "kube-api-access-lkbpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.435797 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkbpx\" (UniqueName: \"kubernetes.io/projected/555f6eb0-a321-45bf-8aff-0afc97a22060-kube-api-access-lkbpx\") on node \"crc\" DevicePath \"\""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.500372 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "555f6eb0-a321-45bf-8aff-0afc97a22060" (UID: "555f6eb0-a321-45bf-8aff-0afc97a22060"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.538011 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555f6eb0-a321-45bf-8aff-0afc97a22060-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.684573 4827 generic.go:334] "Generic (PLEG): container finished" podID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerID="f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c" exitCode=0
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.684628 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerDied","Data":"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"}
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.684678 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x5qrq" event={"ID":"555f6eb0-a321-45bf-8aff-0afc97a22060","Type":"ContainerDied","Data":"39b84b716532eb862d2d8d85d333db1278befec6f6d3f53976879b45a4ada658"}
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.684716 4827 scope.go:117] "RemoveContainer" containerID="f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.686314 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x5qrq"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.716227 4827 scope.go:117] "RemoveContainer" containerID="9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.754150 4827 scope.go:117] "RemoveContainer" containerID="c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.767034 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.777328 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x5qrq"]
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.824607 4827 scope.go:117] "RemoveContainer" containerID="f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"
Jan 26 10:11:25 crc kubenswrapper[4827]: E0126 10:11:25.824956 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c\": container with ID starting with f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c not found: ID does not exist" containerID="f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.825001 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c"} err="failed to get container status \"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c\": rpc error: code = NotFound desc = could not find container \"f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c\": container with ID starting with f7dda32721534d2e3b2efc2fd2cf406874899d5feae6eaae6cd674f943250e4c not found: ID does not exist"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.825021 4827 scope.go:117] "RemoveContainer" containerID="9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"
Jan 26 10:11:25 crc kubenswrapper[4827]: E0126 10:11:25.825381 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b\": container with ID starting with 9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b not found: ID does not exist" containerID="9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.825399 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b"} err="failed to get container status \"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b\": rpc error: code = NotFound desc = could not find container \"9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b\": container with ID starting with 9f4953065e4cfe8e1557032fe559a38b385e8f7124b8058c1c41feb017e5ff3b not found: ID does not exist"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.825427 4827 scope.go:117] "RemoveContainer" containerID="c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b"
Jan 26 10:11:25 crc kubenswrapper[4827]: E0126 10:11:25.825810 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b\": container with ID starting with c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b not found: ID does not exist" containerID="c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b"
Jan 26 10:11:25 crc kubenswrapper[4827]: I0126 10:11:25.825848 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b"} err="failed to get container status \"c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b\": rpc error: code = NotFound desc = could not find container \"c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b\": container with ID starting with c7469721aaeefbcd9c4cbb4a80336abfda881499b1583649dcec349b3475f44b not found: ID does not exist"
Jan 26 10:11:26 crc kubenswrapper[4827]: I0126 10:11:26.703428 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:11:26 crc kubenswrapper[4827]: E0126 10:11:26.703920 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:11:27 crc kubenswrapper[4827]: I0126 10:11:27.715787 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" path="/var/lib/kubelet/pods/555f6eb0-a321-45bf-8aff-0afc97a22060/volumes"
Jan 26 10:11:40 crc kubenswrapper[4827]: I0126 10:11:40.704066 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:11:40 crc kubenswrapper[4827]: E0126 10:11:40.704610 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:11:48 crc kubenswrapper[4827]: I0126 10:11:48.161294 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-jnl6t"]
Jan 26 10:11:48 crc kubenswrapper[4827]: I0126 10:11:48.172946 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-jnl6t"]
Jan 26 10:11:49 crc kubenswrapper[4827]: I0126 10:11:49.048770 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-c5f9-account-create-update-mmbpl"]
Jan 26 10:11:49 crc kubenswrapper[4827]: I0126 10:11:49.060043 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-c5f9-account-create-update-mmbpl"]
Jan 26 10:11:49 crc kubenswrapper[4827]: I0126 10:11:49.722373 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079e6508-8d31-439e-bc59-8fededfa8371" path="/var/lib/kubelet/pods/079e6508-8d31-439e-bc59-8fededfa8371/volumes"
Jan 26 10:11:49 crc kubenswrapper[4827]: I0126 10:11:49.723988 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660be36b-a175-4e62-a15e-ddb67cb009cb" path="/var/lib/kubelet/pods/660be36b-a175-4e62-a15e-ddb67cb009cb/volumes"
Jan 26 10:11:53 crc kubenswrapper[4827]: I0126 10:11:53.702816 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c"
Jan 26 10:11:55 crc kubenswrapper[4827]: I0126 10:11:55.160103 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece"}
Jan 26 10:12:19 crc kubenswrapper[4827]: I0126 10:12:19.062826 4827 scope.go:117] "RemoveContainer" containerID="a4193fd343ff67f6875461b53393c1003031936705d94b183106411e8ffa2544"
Jan 26 10:12:19 crc kubenswrapper[4827]: I0126 10:12:19.097260 4827 scope.go:117] "RemoveContainer" containerID="b017b06a7992110fe3719589c88d8b7ac3f8a9c2c668797a6bc2a63e75474818"
Jan 26 10:12:33 crc kubenswrapper[4827]: I0126 10:12:33.089545 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-fdw9j"]
Jan 26 10:12:33 crc kubenswrapper[4827]: I0126 10:12:33.106412 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-fdw9j"]
Jan 26 10:12:33 crc kubenswrapper[4827]: I0126 10:12:33.714997 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5324cf16-f36a-4a9c-8b04-3646cab28702" path="/var/lib/kubelet/pods/5324cf16-f36a-4a9c-8b04-3646cab28702/volumes"
Jan 26 10:13:19 crc kubenswrapper[4827]: I0126 10:13:19.238916 4827 scope.go:117] "RemoveContainer" containerID="6fc23479ae0c65ff87718b9cb3e239580513082941d6c825193f3380058317ba"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.822975 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"]
Jan 26 10:13:25 crc kubenswrapper[4827]: E0126 10:13:25.824953 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.825059 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server"
Jan 26 10:13:25 crc kubenswrapper[4827]: E0126 10:13:25.825151 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="extract-content"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.825217 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="extract-content"
Jan 26 10:13:25 crc kubenswrapper[4827]: E0126 10:13:25.825290 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="extract-utilities"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.825354 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="extract-utilities"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.827798 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="555f6eb0-a321-45bf-8aff-0afc97a22060" containerName="registry-server"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.829352 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.849124 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"]
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.918480 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.918550 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlj24\" (UniqueName: \"kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:25 crc kubenswrapper[4827]: I0126 10:13:25.918589 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.020586 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.020776 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlj24\" (UniqueName: \"kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.020827 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.021149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.021308 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.052704 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlj24\" (UniqueName: \"kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24\") pod \"redhat-marketplace-zcvrs\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.155721 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrs"
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.852502 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"]
Jan 26 10:13:26 crc kubenswrapper[4827]: I0126 10:13:26.967141 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerStarted","Data":"94c20e86c747e161c272fe9a29689935a74618c1ea7dea769b58947d104548de"}
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.007060 4827 generic.go:334] "Generic (PLEG): container finished" podID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerID="f17bb2ef34b129228e8b2d5c194b1bac7da1e73b2bf113e79f940f05ba7e438a" exitCode=0
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.008390 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerDied","Data":"f17bb2ef34b129228e8b2d5c194b1bac7da1e73b2bf113e79f940f05ba7e438a"}
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.030143 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"]
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.032134 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.046730 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"]
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.073297 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.073491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.074264 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f678j\" (UniqueName: \"kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.176572 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.176674 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f678j\" (UniqueName: \"kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.176792 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.177107 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.177283 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.610825 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f678j\" (UniqueName: \"kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j\") pod \"certified-operators-n9c5m\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " pod="openshift-marketplace/certified-operators-n9c5m"
Jan 26 10:13:28 crc kubenswrapper[4827]: I0126 10:13:28.651357 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:29 crc kubenswrapper[4827]: I0126 10:13:29.184681 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"] Jan 26 10:13:29 crc kubenswrapper[4827]: W0126 10:13:29.198107 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d7b1c8a_3637_442f_90d5_8b318761b368.slice/crio-440402a3fa4e1bb1a95c297971ab1bd045d186ab3512e4bdf170564b6d8d4336 WatchSource:0}: Error finding container 440402a3fa4e1bb1a95c297971ab1bd045d186ab3512e4bdf170564b6d8d4336: Status 404 returned error can't find the container with id 440402a3fa4e1bb1a95c297971ab1bd045d186ab3512e4bdf170564b6d8d4336 Jan 26 10:13:30 crc kubenswrapper[4827]: I0126 10:13:30.030323 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerID="fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba" exitCode=0 Jan 26 10:13:30 crc kubenswrapper[4827]: I0126 10:13:30.030421 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerDied","Data":"fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba"} Jan 26 10:13:30 crc kubenswrapper[4827]: I0126 10:13:30.031104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerStarted","Data":"440402a3fa4e1bb1a95c297971ab1bd045d186ab3512e4bdf170564b6d8d4336"} Jan 26 10:13:30 crc kubenswrapper[4827]: I0126 10:13:30.038528 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" 
event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerStarted","Data":"8d1f11a51c2198b00aaa0bd4434efc3b08586516a7eacd900b4964f9d0e500fb"} Jan 26 10:13:31 crc kubenswrapper[4827]: I0126 10:13:31.051061 4827 generic.go:334] "Generic (PLEG): container finished" podID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerID="8d1f11a51c2198b00aaa0bd4434efc3b08586516a7eacd900b4964f9d0e500fb" exitCode=0 Jan 26 10:13:31 crc kubenswrapper[4827]: I0126 10:13:31.051183 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerDied","Data":"8d1f11a51c2198b00aaa0bd4434efc3b08586516a7eacd900b4964f9d0e500fb"} Jan 26 10:13:32 crc kubenswrapper[4827]: I0126 10:13:32.060796 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerStarted","Data":"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e"} Jan 26 10:13:35 crc kubenswrapper[4827]: I0126 10:13:35.086140 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerID="61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e" exitCode=0 Jan 26 10:13:35 crc kubenswrapper[4827]: I0126 10:13:35.086222 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerDied","Data":"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e"} Jan 26 10:13:35 crc kubenswrapper[4827]: I0126 10:13:35.089112 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerStarted","Data":"80929512c9a8cde369316ca9628747adb004d0e27032c9ae39b4795051c6190b"} Jan 26 10:13:35 crc kubenswrapper[4827]: I0126 
10:13:35.145784 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcvrs" podStartSLOduration=3.464088417 podStartE2EDuration="10.14576723s" podCreationTimestamp="2026-01-26 10:13:25 +0000 UTC" firstStartedPulling="2026-01-26 10:13:28.018309149 +0000 UTC m=+4036.666980968" lastFinishedPulling="2026-01-26 10:13:34.699987962 +0000 UTC m=+4043.348659781" observedRunningTime="2026-01-26 10:13:35.142490575 +0000 UTC m=+4043.791162394" watchObservedRunningTime="2026-01-26 10:13:35.14576723 +0000 UTC m=+4043.794439049" Jan 26 10:13:36 crc kubenswrapper[4827]: I0126 10:13:36.102087 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerStarted","Data":"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3"} Jan 26 10:13:36 crc kubenswrapper[4827]: I0126 10:13:36.133252 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n9c5m" podStartSLOduration=2.644579867 podStartE2EDuration="8.133227713s" podCreationTimestamp="2026-01-26 10:13:28 +0000 UTC" firstStartedPulling="2026-01-26 10:13:30.035187006 +0000 UTC m=+4038.683858845" lastFinishedPulling="2026-01-26 10:13:35.523834872 +0000 UTC m=+4044.172506691" observedRunningTime="2026-01-26 10:13:36.123124669 +0000 UTC m=+4044.771796498" watchObservedRunningTime="2026-01-26 10:13:36.133227713 +0000 UTC m=+4044.781899572" Jan 26 10:13:36 crc kubenswrapper[4827]: I0126 10:13:36.156727 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:36 crc kubenswrapper[4827]: I0126 10:13:36.156865 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:37 crc kubenswrapper[4827]: I0126 10:13:37.202537 4827 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zcvrs" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="registry-server" probeResult="failure" output=< Jan 26 10:13:37 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 10:13:37 crc kubenswrapper[4827]: > Jan 26 10:13:38 crc kubenswrapper[4827]: I0126 10:13:38.652477 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:38 crc kubenswrapper[4827]: I0126 10:13:38.653992 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:39 crc kubenswrapper[4827]: I0126 10:13:39.956770 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-n9c5m" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="registry-server" probeResult="failure" output=< Jan 26 10:13:39 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 10:13:39 crc kubenswrapper[4827]: > Jan 26 10:13:46 crc kubenswrapper[4827]: I0126 10:13:46.265149 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:46 crc kubenswrapper[4827]: I0126 10:13:46.331294 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:46 crc kubenswrapper[4827]: I0126 10:13:46.517911 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"] Jan 26 10:13:47 crc kubenswrapper[4827]: I0126 10:13:47.500777 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zcvrs" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="registry-server" 
containerID="cri-o://80929512c9a8cde369316ca9628747adb004d0e27032c9ae39b4795051c6190b" gracePeriod=2 Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.509703 4827 generic.go:334] "Generic (PLEG): container finished" podID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerID="80929512c9a8cde369316ca9628747adb004d0e27032c9ae39b4795051c6190b" exitCode=0 Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.509743 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerDied","Data":"80929512c9a8cde369316ca9628747adb004d0e27032c9ae39b4795051c6190b"} Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.509768 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrs" event={"ID":"12e97677-8f9d-4bfe-9cc6-5661667a20e8","Type":"ContainerDied","Data":"94c20e86c747e161c272fe9a29689935a74618c1ea7dea769b58947d104548de"} Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.509778 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c20e86c747e161c272fe9a29689935a74618c1ea7dea769b58947d104548de" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.558823 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.657969 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content\") pod \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.658111 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities\") pod \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.658248 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlj24\" (UniqueName: \"kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24\") pod \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\" (UID: \"12e97677-8f9d-4bfe-9cc6-5661667a20e8\") " Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.658719 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities" (OuterVolumeSpecName: "utilities") pod "12e97677-8f9d-4bfe-9cc6-5661667a20e8" (UID: "12e97677-8f9d-4bfe-9cc6-5661667a20e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.658912 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.666341 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24" (OuterVolumeSpecName: "kube-api-access-wlj24") pod "12e97677-8f9d-4bfe-9cc6-5661667a20e8" (UID: "12e97677-8f9d-4bfe-9cc6-5661667a20e8"). InnerVolumeSpecName "kube-api-access-wlj24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.686974 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12e97677-8f9d-4bfe-9cc6-5661667a20e8" (UID: "12e97677-8f9d-4bfe-9cc6-5661667a20e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.700318 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.746276 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.761038 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlj24\" (UniqueName: \"kubernetes.io/projected/12e97677-8f9d-4bfe-9cc6-5661667a20e8-kube-api-access-wlj24\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:48 crc kubenswrapper[4827]: I0126 10:13:48.761225 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e97677-8f9d-4bfe-9cc6-5661667a20e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:49 crc kubenswrapper[4827]: I0126 10:13:49.527490 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrs" Jan 26 10:13:49 crc kubenswrapper[4827]: I0126 10:13:49.593220 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"] Jan 26 10:13:49 crc kubenswrapper[4827]: I0126 10:13:49.612751 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrs"] Jan 26 10:13:49 crc kubenswrapper[4827]: I0126 10:13:49.718266 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" path="/var/lib/kubelet/pods/12e97677-8f9d-4bfe-9cc6-5661667a20e8/volumes" Jan 26 10:13:50 crc kubenswrapper[4827]: I0126 10:13:50.118312 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"] Jan 26 10:13:50 crc kubenswrapper[4827]: I0126 10:13:50.537357 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n9c5m" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="registry-server" containerID="cri-o://998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3" gracePeriod=2 Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.159782 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.321940 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content\") pod \"7d7b1c8a-3637-442f-90d5-8b318761b368\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.322143 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities\") pod \"7d7b1c8a-3637-442f-90d5-8b318761b368\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.322219 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f678j\" (UniqueName: \"kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j\") pod \"7d7b1c8a-3637-442f-90d5-8b318761b368\" (UID: \"7d7b1c8a-3637-442f-90d5-8b318761b368\") " Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.322693 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities" (OuterVolumeSpecName: "utilities") pod "7d7b1c8a-3637-442f-90d5-8b318761b368" (UID: "7d7b1c8a-3637-442f-90d5-8b318761b368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.370530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d7b1c8a-3637-442f-90d5-8b318761b368" (UID: "7d7b1c8a-3637-442f-90d5-8b318761b368"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.424556 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.424617 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7b1c8a-3637-442f-90d5-8b318761b368-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.558448 4827 generic.go:334] "Generic (PLEG): container finished" podID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerID="998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3" exitCode=0 Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.558502 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerDied","Data":"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3"} Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.558536 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9c5m" event={"ID":"7d7b1c8a-3637-442f-90d5-8b318761b368","Type":"ContainerDied","Data":"440402a3fa4e1bb1a95c297971ab1bd045d186ab3512e4bdf170564b6d8d4336"} Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.558579 4827 scope.go:117] "RemoveContainer" containerID="998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.559163 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n9c5m" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.596181 4827 scope.go:117] "RemoveContainer" containerID="61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.906488 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j" (OuterVolumeSpecName: "kube-api-access-f678j") pod "7d7b1c8a-3637-442f-90d5-8b318761b368" (UID: "7d7b1c8a-3637-442f-90d5-8b318761b368"). InnerVolumeSpecName "kube-api-access-f678j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.925791 4827 scope.go:117] "RemoveContainer" containerID="fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba" Jan 26 10:13:51 crc kubenswrapper[4827]: I0126 10:13:51.935750 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f678j\" (UniqueName: \"kubernetes.io/projected/7d7b1c8a-3637-442f-90d5-8b318761b368-kube-api-access-f678j\") on node \"crc\" DevicePath \"\"" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.114681 4827 scope.go:117] "RemoveContainer" containerID="998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3" Jan 26 10:13:52 crc kubenswrapper[4827]: E0126 10:13:52.115238 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3\": container with ID starting with 998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3 not found: ID does not exist" containerID="998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.115341 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3"} err="failed to get container status \"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3\": rpc error: code = NotFound desc = could not find container \"998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3\": container with ID starting with 998057c2cc83487e6f90ff97ba7cc605a896b910fe3c9f00a8a3ea2911925bc3 not found: ID does not exist" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.115435 4827 scope.go:117] "RemoveContainer" containerID="61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e" Jan 26 10:13:52 crc kubenswrapper[4827]: E0126 10:13:52.115831 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e\": container with ID starting with 61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e not found: ID does not exist" containerID="61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.115916 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e"} err="failed to get container status \"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e\": rpc error: code = NotFound desc = could not find container \"61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e\": container with ID starting with 61f676df64db425fa857304fc24b980e3b05a8e2d2ac0dd24885d55a0fc0373e not found: ID does not exist" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.115984 4827 scope.go:117] "RemoveContainer" containerID="fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba" Jan 26 10:13:52 crc kubenswrapper[4827]: E0126 10:13:52.116242 4827 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba\": container with ID starting with fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba not found: ID does not exist" containerID="fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.116263 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba"} err="failed to get container status \"fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba\": rpc error: code = NotFound desc = could not find container \"fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba\": container with ID starting with fb314a21acec336608e67a5661fb4f05b5457836886818760bb5ec23bd48e8ba not found: ID does not exist" Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.200751 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"] Jan 26 10:13:52 crc kubenswrapper[4827]: I0126 10:13:52.208964 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n9c5m"] Jan 26 10:13:53 crc kubenswrapper[4827]: I0126 10:13:53.722095 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" path="/var/lib/kubelet/pods/7d7b1c8a-3637-442f-90d5-8b318761b368/volumes" Jan 26 10:14:12 crc kubenswrapper[4827]: I0126 10:14:12.268476 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:14:12 crc kubenswrapper[4827]: I0126 10:14:12.270191 4827 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:14:42 crc kubenswrapper[4827]: I0126 10:14:42.269264 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:14:42 crc kubenswrapper[4827]: I0126 10:14:42.269989 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.204807 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx"] Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.205636 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206103 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.206130 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="extract-content" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206137 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" 
containerName="extract-content" Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.206151 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206156 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.206174 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="extract-utilities" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206181 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="extract-utilities" Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.206193 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="extract-content" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206199 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="extract-content" Jan 26 10:15:00 crc kubenswrapper[4827]: E0126 10:15:00.206214 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="extract-utilities" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206219 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="extract-utilities" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206409 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7b1c8a-3637-442f-90d5-8b318761b368" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.206433 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="12e97677-8f9d-4bfe-9cc6-5661667a20e8" containerName="registry-server" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.207058 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.212587 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.212586 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.225187 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s44td\" (UniqueName: \"kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.225259 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.225301 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.273594 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx"] Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.327329 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.327626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.328025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s44td\" (UniqueName: \"kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.329625 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: 
I0126 10:15:00.344806 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.345625 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44td\" (UniqueName: \"kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td\") pod \"collect-profiles-29490375-sb6jx\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:00 crc kubenswrapper[4827]: I0126 10:15:00.536265 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:01 crc kubenswrapper[4827]: I0126 10:15:01.060003 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx"] Jan 26 10:15:01 crc kubenswrapper[4827]: I0126 10:15:01.201898 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" event={"ID":"6b9c86aa-e083-4018-bf4e-d469ed5d713b","Type":"ContainerStarted","Data":"cb702dc1bc5bfdcedfc42a28b09353e9e150b33db65ec7339afd675610bf2b52"} Jan 26 10:15:01 crc kubenswrapper[4827]: I0126 10:15:01.202271 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" event={"ID":"6b9c86aa-e083-4018-bf4e-d469ed5d713b","Type":"ContainerStarted","Data":"dd9de75ef983fb35ad6b7c7e4ef3b8b5cbcd6fd7c5f90147c03f36f6f7baf03a"} Jan 26 10:15:01 crc kubenswrapper[4827]: I0126 10:15:01.239207 4827 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" podStartSLOduration=1.239180413 podStartE2EDuration="1.239180413s" podCreationTimestamp="2026-01-26 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:15:01.228911944 +0000 UTC m=+4129.877583763" watchObservedRunningTime="2026-01-26 10:15:01.239180413 +0000 UTC m=+4129.887852252" Jan 26 10:15:02 crc kubenswrapper[4827]: I0126 10:15:02.211779 4827 generic.go:334] "Generic (PLEG): container finished" podID="6b9c86aa-e083-4018-bf4e-d469ed5d713b" containerID="cb702dc1bc5bfdcedfc42a28b09353e9e150b33db65ec7339afd675610bf2b52" exitCode=0 Jan 26 10:15:02 crc kubenswrapper[4827]: I0126 10:15:02.211842 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" event={"ID":"6b9c86aa-e083-4018-bf4e-d469ed5d713b","Type":"ContainerDied","Data":"cb702dc1bc5bfdcedfc42a28b09353e9e150b33db65ec7339afd675610bf2b52"} Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.586796 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.593238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s44td\" (UniqueName: \"kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td\") pod \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.593281 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume\") pod \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.593311 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume\") pod \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\" (UID: \"6b9c86aa-e083-4018-bf4e-d469ed5d713b\") " Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.593876 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b9c86aa-e083-4018-bf4e-d469ed5d713b" (UID: "6b9c86aa-e083-4018-bf4e-d469ed5d713b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.599613 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td" (OuterVolumeSpecName: "kube-api-access-s44td") pod "6b9c86aa-e083-4018-bf4e-d469ed5d713b" (UID: "6b9c86aa-e083-4018-bf4e-d469ed5d713b"). 
InnerVolumeSpecName "kube-api-access-s44td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.604776 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b9c86aa-e083-4018-bf4e-d469ed5d713b" (UID: "6b9c86aa-e083-4018-bf4e-d469ed5d713b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.695411 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s44td\" (UniqueName: \"kubernetes.io/projected/6b9c86aa-e083-4018-bf4e-d469ed5d713b-kube-api-access-s44td\") on node \"crc\" DevicePath \"\"" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.695442 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b9c86aa-e083-4018-bf4e-d469ed5d713b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:15:03 crc kubenswrapper[4827]: I0126 10:15:03.695451 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b9c86aa-e083-4018-bf4e-d469ed5d713b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:15:04 crc kubenswrapper[4827]: I0126 10:15:04.239218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" event={"ID":"6b9c86aa-e083-4018-bf4e-d469ed5d713b","Type":"ContainerDied","Data":"dd9de75ef983fb35ad6b7c7e4ef3b8b5cbcd6fd7c5f90147c03f36f6f7baf03a"} Jan 26 10:15:04 crc kubenswrapper[4827]: I0126 10:15:04.239276 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9de75ef983fb35ad6b7c7e4ef3b8b5cbcd6fd7c5f90147c03f36f6f7baf03a" Jan 26 10:15:04 crc kubenswrapper[4827]: I0126 10:15:04.239859 4827 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490375-sb6jx" Jan 26 10:15:04 crc kubenswrapper[4827]: I0126 10:15:04.330252 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"] Jan 26 10:15:04 crc kubenswrapper[4827]: I0126 10:15:04.337161 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-q78km"] Jan 26 10:15:05 crc kubenswrapper[4827]: I0126 10:15:05.716687 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca69756-a95e-4358-9238-8ebf213dd239" path="/var/lib/kubelet/pods/6ca69756-a95e-4358-9238-8ebf213dd239/volumes" Jan 26 10:15:12 crc kubenswrapper[4827]: I0126 10:15:12.268256 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:15:12 crc kubenswrapper[4827]: I0126 10:15:12.268735 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:15:12 crc kubenswrapper[4827]: I0126 10:15:12.268793 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:15:12 crc kubenswrapper[4827]: I0126 10:15:12.269434 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece"} 
pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:15:12 crc kubenswrapper[4827]: I0126 10:15:12.269500 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece" gracePeriod=600 Jan 26 10:15:13 crc kubenswrapper[4827]: I0126 10:15:13.326114 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece" exitCode=0 Jan 26 10:15:13 crc kubenswrapper[4827]: I0126 10:15:13.326171 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece"} Jan 26 10:15:13 crc kubenswrapper[4827]: I0126 10:15:13.326656 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"} Jan 26 10:15:13 crc kubenswrapper[4827]: I0126 10:15:13.326675 4827 scope.go:117] "RemoveContainer" containerID="dba907e09a9f5fc052450c3b5a6b0492d210e661dfc65af690af40013296980c" Jan 26 10:15:19 crc kubenswrapper[4827]: I0126 10:15:19.343553 4827 scope.go:117] "RemoveContainer" containerID="09cccfb2652366ecc5011d0b643001a192fb49dc24233023a7fe55251f095123" Jan 26 10:17:12 crc kubenswrapper[4827]: I0126 10:17:12.268972 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:17:12 crc kubenswrapper[4827]: I0126 10:17:12.269254 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:17:42 crc kubenswrapper[4827]: I0126 10:17:42.268882 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:17:42 crc kubenswrapper[4827]: I0126 10:17:42.269541 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.269145 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.269826 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.269886 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.270696 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.270788 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" gracePeriod=600 Jan 26 10:18:12 crc kubenswrapper[4827]: E0126 10:18:12.925004 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.942469 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" exitCode=0 Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.942528 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"} Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.942574 4827 scope.go:117] "RemoveContainer" containerID="98da6af7b104e7b43e8e00fd053b935f563f6e6c4f38a597785ce5be62c07ece" Jan 26 10:18:12 crc kubenswrapper[4827]: I0126 10:18:12.943305 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:18:12 crc kubenswrapper[4827]: E0126 10:18:12.943787 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:18:23 crc kubenswrapper[4827]: I0126 10:18:23.703217 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:18:23 crc kubenswrapper[4827]: E0126 10:18:23.704309 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:18:32 crc kubenswrapper[4827]: I0126 10:18:32.171755 4827 generic.go:334] "Generic (PLEG): container finished" podID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" 
containerID="96717f14d420003389721918ba734702443f03113a1c5e574b04b35348a34fd8" exitCode=0 Jan 26 10:18:32 crc kubenswrapper[4827]: I0126 10:18:32.172827 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3afb0c8-7da0-4f91-a689-921ef566e7a2","Type":"ContainerDied","Data":"96717f14d420003389721918ba734702443f03113a1c5e574b04b35348a34fd8"} Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.517881 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.660680 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.660724 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.660751 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661514 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret\") pod 
\"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661556 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661590 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661611 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661684 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.661707 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data\") pod \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\" (UID: \"a3afb0c8-7da0-4f91-a689-921ef566e7a2\") " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.662468 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.662879 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data" (OuterVolumeSpecName: "config-data") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.666562 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.667983 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4" (OuterVolumeSpecName: "kube-api-access-nscb4") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "kube-api-access-nscb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.693839 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.699583 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.705005 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.724902 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.745725 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a3afb0c8-7da0-4f91-a689-921ef566e7a2" (UID: "a3afb0c8-7da0-4f91-a689-921ef566e7a2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764167 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764274 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3afb0c8-7da0-4f91-a689-921ef566e7a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764291 4827 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764304 4827 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764319 4827 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3afb0c8-7da0-4f91-a689-921ef566e7a2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764332 4827 reconciler_common.go:293] "Volume 
detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764373 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3afb0c8-7da0-4f91-a689-921ef566e7a2-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764402 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.764415 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/a3afb0c8-7da0-4f91-a689-921ef566e7a2-kube-api-access-nscb4\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.790750 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 10:18:33 crc kubenswrapper[4827]: I0126 10:18:33.866340 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 10:18:34 crc kubenswrapper[4827]: I0126 10:18:34.197252 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3afb0c8-7da0-4f91-a689-921ef566e7a2","Type":"ContainerDied","Data":"624c761f979e1d068732e32ee9d613da34943c9d9d920d1fae85786ce414a144"} Jan 26 10:18:34 crc kubenswrapper[4827]: I0126 10:18:34.197530 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624c761f979e1d068732e32ee9d613da34943c9d9d920d1fae85786ce414a144" Jan 26 10:18:34 crc 
kubenswrapper[4827]: I0126 10:18:34.197678 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 10:18:36 crc kubenswrapper[4827]: I0126 10:18:36.704139 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:18:36 crc kubenswrapper[4827]: E0126 10:18:36.704481 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.161258 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 10:18:39 crc kubenswrapper[4827]: E0126 10:18:39.162392 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" containerName="tempest-tests-tempest-tests-runner" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.162413 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" containerName="tempest-tests-tempest-tests-runner" Jan 26 10:18:39 crc kubenswrapper[4827]: E0126 10:18:39.162452 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9c86aa-e083-4018-bf4e-d469ed5d713b" containerName="collect-profiles" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.162489 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9c86aa-e083-4018-bf4e-d469ed5d713b" containerName="collect-profiles" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.162778 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a3afb0c8-7da0-4f91-a689-921ef566e7a2" containerName="tempest-tests-tempest-tests-runner" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.162800 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9c86aa-e083-4018-bf4e-d469ed5d713b" containerName="collect-profiles" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.163441 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.165921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-znm65" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.173013 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.281138 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42z4k\" (UniqueName: \"kubernetes.io/projected/c074f00d-8c21-4bab-9019-138c164586fc-kube-api-access-42z4k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.281328 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.383970 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42z4k\" (UniqueName: 
\"kubernetes.io/projected/c074f00d-8c21-4bab-9019-138c164586fc-kube-api-access-42z4k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.384087 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.384765 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.413450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42z4k\" (UniqueName: \"kubernetes.io/projected/c074f00d-8c21-4bab-9019-138c164586fc-kube-api-access-42z4k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 crc kubenswrapper[4827]: I0126 10:18:39.419818 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c074f00d-8c21-4bab-9019-138c164586fc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:39 
crc kubenswrapper[4827]: I0126 10:18:39.481391 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 10:18:40 crc kubenswrapper[4827]: I0126 10:18:40.054693 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 10:18:40 crc kubenswrapper[4827]: I0126 10:18:40.059938 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 10:18:40 crc kubenswrapper[4827]: I0126 10:18:40.261950 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c074f00d-8c21-4bab-9019-138c164586fc","Type":"ContainerStarted","Data":"e8a77d09974e8e1eab2b91991a8ce695123e1e979dfe4deaccd2434cdbaa55b3"} Jan 26 10:18:42 crc kubenswrapper[4827]: I0126 10:18:42.280759 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c074f00d-8c21-4bab-9019-138c164586fc","Type":"ContainerStarted","Data":"bf2695706a61f00d5b25ada2fd7f15fa9d802f4ae54f4964fd7c483378c82fe6"} Jan 26 10:18:42 crc kubenswrapper[4827]: I0126 10:18:42.301949 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.596987095 podStartE2EDuration="3.301924747s" podCreationTimestamp="2026-01-26 10:18:39 +0000 UTC" firstStartedPulling="2026-01-26 10:18:40.059747848 +0000 UTC m=+4348.708419667" lastFinishedPulling="2026-01-26 10:18:41.76468546 +0000 UTC m=+4350.413357319" observedRunningTime="2026-01-26 10:18:42.301054264 +0000 UTC m=+4350.949726083" watchObservedRunningTime="2026-01-26 10:18:42.301924747 +0000 UTC m=+4350.950596576" Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.896998 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.900822 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.912903 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.951046 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.951113 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:46 crc kubenswrapper[4827]: I0126 10:18:46.951156 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vvk\" (UniqueName: \"kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.053023 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") 
" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.053079 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.053108 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8vvk\" (UniqueName: \"kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.053542 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.053600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.071559 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8vvk\" (UniqueName: \"kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk\") pod \"community-operators-gt2pr\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " 
pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.247578 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:47 crc kubenswrapper[4827]: I0126 10:18:47.793147 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:18:47 crc kubenswrapper[4827]: W0126 10:18:47.806019 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f4b744_e495_445f_a3c6_06cb41107cd7.slice/crio-b866bb8cff7fd9034f8dcb20b7513e5d14c2ec7228d25b7aa1f6e0469c13d941 WatchSource:0}: Error finding container b866bb8cff7fd9034f8dcb20b7513e5d14c2ec7228d25b7aa1f6e0469c13d941: Status 404 returned error can't find the container with id b866bb8cff7fd9034f8dcb20b7513e5d14c2ec7228d25b7aa1f6e0469c13d941 Jan 26 10:18:48 crc kubenswrapper[4827]: I0126 10:18:48.359256 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerID="c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4" exitCode=0 Jan 26 10:18:48 crc kubenswrapper[4827]: I0126 10:18:48.359327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerDied","Data":"c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4"} Jan 26 10:18:48 crc kubenswrapper[4827]: I0126 10:18:48.359578 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerStarted","Data":"b866bb8cff7fd9034f8dcb20b7513e5d14c2ec7228d25b7aa1f6e0469c13d941"} Jan 26 10:18:49 crc kubenswrapper[4827]: I0126 10:18:49.373709 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerStarted","Data":"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97"} Jan 26 10:18:50 crc kubenswrapper[4827]: I0126 10:18:50.392141 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerID="f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97" exitCode=0 Jan 26 10:18:50 crc kubenswrapper[4827]: I0126 10:18:50.392446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerDied","Data":"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97"} Jan 26 10:18:51 crc kubenswrapper[4827]: I0126 10:18:51.402371 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerStarted","Data":"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f"} Jan 26 10:18:51 crc kubenswrapper[4827]: I0126 10:18:51.427079 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gt2pr" podStartSLOduration=3.001681623 podStartE2EDuration="5.42705728s" podCreationTimestamp="2026-01-26 10:18:46 +0000 UTC" firstStartedPulling="2026-01-26 10:18:48.362247884 +0000 UTC m=+4357.010919703" lastFinishedPulling="2026-01-26 10:18:50.787623541 +0000 UTC m=+4359.436295360" observedRunningTime="2026-01-26 10:18:51.42094778 +0000 UTC m=+4360.069619609" watchObservedRunningTime="2026-01-26 10:18:51.42705728 +0000 UTC m=+4360.075729099" Jan 26 10:18:51 crc kubenswrapper[4827]: I0126 10:18:51.708359 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:18:51 crc kubenswrapper[4827]: E0126 10:18:51.708714 4827 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:18:57 crc kubenswrapper[4827]: I0126 10:18:57.248299 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:57 crc kubenswrapper[4827]: I0126 10:18:57.249016 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:57 crc kubenswrapper[4827]: I0126 10:18:57.329988 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:57 crc kubenswrapper[4827]: I0126 10:18:57.502968 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:18:57 crc kubenswrapper[4827]: I0126 10:18:57.579438 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:18:59 crc kubenswrapper[4827]: I0126 10:18:59.472309 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gt2pr" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="registry-server" containerID="cri-o://e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f" gracePeriod=2 Jan 26 10:18:59 crc kubenswrapper[4827]: I0126 10:18:59.943726 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.023174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8vvk\" (UniqueName: \"kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk\") pod \"e7f4b744-e495-445f-a3c6-06cb41107cd7\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.023602 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content\") pod \"e7f4b744-e495-445f-a3c6-06cb41107cd7\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.023653 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities\") pod \"e7f4b744-e495-445f-a3c6-06cb41107cd7\" (UID: \"e7f4b744-e495-445f-a3c6-06cb41107cd7\") " Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.024531 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities" (OuterVolumeSpecName: "utilities") pod "e7f4b744-e495-445f-a3c6-06cb41107cd7" (UID: "e7f4b744-e495-445f-a3c6-06cb41107cd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.028448 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk" (OuterVolumeSpecName: "kube-api-access-w8vvk") pod "e7f4b744-e495-445f-a3c6-06cb41107cd7" (UID: "e7f4b744-e495-445f-a3c6-06cb41107cd7"). InnerVolumeSpecName "kube-api-access-w8vvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.108801 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7f4b744-e495-445f-a3c6-06cb41107cd7" (UID: "e7f4b744-e495-445f-a3c6-06cb41107cd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.127194 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8vvk\" (UniqueName: \"kubernetes.io/projected/e7f4b744-e495-445f-a3c6-06cb41107cd7-kube-api-access-w8vvk\") on node \"crc\" DevicePath \"\"" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.127256 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.127277 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7f4b744-e495-445f-a3c6-06cb41107cd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.482821 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerID="e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f" exitCode=0 Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.482907 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt2pr" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.482918 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerDied","Data":"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f"} Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.483953 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt2pr" event={"ID":"e7f4b744-e495-445f-a3c6-06cb41107cd7","Type":"ContainerDied","Data":"b866bb8cff7fd9034f8dcb20b7513e5d14c2ec7228d25b7aa1f6e0469c13d941"} Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.483990 4827 scope.go:117] "RemoveContainer" containerID="e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.508129 4827 scope.go:117] "RemoveContainer" containerID="f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.528572 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.538796 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gt2pr"] Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.545550 4827 scope.go:117] "RemoveContainer" containerID="c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.586114 4827 scope.go:117] "RemoveContainer" containerID="e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f" Jan 26 10:19:00 crc kubenswrapper[4827]: E0126 10:19:00.586560 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f\": container with ID starting with e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f not found: ID does not exist" containerID="e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.586610 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f"} err="failed to get container status \"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f\": rpc error: code = NotFound desc = could not find container \"e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f\": container with ID starting with e2318c2faebdaa4f804dd77f4c2e7fdd63387941337304cf2b2869852b273c7f not found: ID does not exist" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.586677 4827 scope.go:117] "RemoveContainer" containerID="f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97" Jan 26 10:19:00 crc kubenswrapper[4827]: E0126 10:19:00.587073 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97\": container with ID starting with f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97 not found: ID does not exist" containerID="f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.587132 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97"} err="failed to get container status \"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97\": rpc error: code = NotFound desc = could not find container \"f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97\": container with ID 
starting with f0e3f261a3b2ae51dee475a7ddbca024b94fa367be8a29b32d08d026bb387a97 not found: ID does not exist" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.587163 4827 scope.go:117] "RemoveContainer" containerID="c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4" Jan 26 10:19:00 crc kubenswrapper[4827]: E0126 10:19:00.587453 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4\": container with ID starting with c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4 not found: ID does not exist" containerID="c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4" Jan 26 10:19:00 crc kubenswrapper[4827]: I0126 10:19:00.587550 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4"} err="failed to get container status \"c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4\": rpc error: code = NotFound desc = could not find container \"c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4\": container with ID starting with c91042b7fe5a84b9304e91c110054fdce42d75c63612f57f62a14b7a7e95fdb4 not found: ID does not exist" Jan 26 10:19:01 crc kubenswrapper[4827]: I0126 10:19:01.721048 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" path="/var/lib/kubelet/pods/e7f4b744-e495-445f-a3c6-06cb41107cd7/volumes" Jan 26 10:19:03 crc kubenswrapper[4827]: I0126 10:19:03.703496 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:19:03 crc kubenswrapper[4827]: E0126 10:19:03.704399 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.325746 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tphk2/must-gather-82mc4"] Jan 26 10:19:07 crc kubenswrapper[4827]: E0126 10:19:07.327270 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="extract-utilities" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.327346 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="extract-utilities" Jan 26 10:19:07 crc kubenswrapper[4827]: E0126 10:19:07.327419 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="registry-server" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.327475 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="registry-server" Jan 26 10:19:07 crc kubenswrapper[4827]: E0126 10:19:07.327541 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="extract-content" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.327592 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="extract-content" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.327825 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f4b744-e495-445f-a3c6-06cb41107cd7" containerName="registry-server" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.328805 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.331383 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tphk2"/"kube-root-ca.crt" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.331744 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tphk2"/"openshift-service-ca.crt" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.332023 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tphk2"/"default-dockercfg-wtkn6" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.338009 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tphk2/must-gather-82mc4"] Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.406684 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.406732 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgs4\" (UniqueName: \"kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.508025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " 
pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.508068 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgs4\" (UniqueName: \"kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.508693 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.530025 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgs4\" (UniqueName: \"kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4\") pod \"must-gather-82mc4\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:07 crc kubenswrapper[4827]: I0126 10:19:07.689663 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:19:08 crc kubenswrapper[4827]: I0126 10:19:08.166689 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tphk2/must-gather-82mc4"] Jan 26 10:19:08 crc kubenswrapper[4827]: W0126 10:19:08.413712 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5733bfc_a425_4a49_b57a_cb6e861764ab.slice/crio-311c0c2ab04418d26f9e33a6a6830b68f7ecb0f2a2592edda8d80e3dbc26cbcf WatchSource:0}: Error finding container 311c0c2ab04418d26f9e33a6a6830b68f7ecb0f2a2592edda8d80e3dbc26cbcf: Status 404 returned error can't find the container with id 311c0c2ab04418d26f9e33a6a6830b68f7ecb0f2a2592edda8d80e3dbc26cbcf Jan 26 10:19:08 crc kubenswrapper[4827]: I0126 10:19:08.566391 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/must-gather-82mc4" event={"ID":"d5733bfc-a425-4a49-b57a-cb6e861764ab","Type":"ContainerStarted","Data":"311c0c2ab04418d26f9e33a6a6830b68f7ecb0f2a2592edda8d80e3dbc26cbcf"} Jan 26 10:19:15 crc kubenswrapper[4827]: I0126 10:19:15.703108 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:19:15 crc kubenswrapper[4827]: E0126 10:19:15.703681 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:19:17 crc kubenswrapper[4827]: I0126 10:19:17.651579 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/must-gather-82mc4" 
event={"ID":"d5733bfc-a425-4a49-b57a-cb6e861764ab","Type":"ContainerStarted","Data":"4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e"}
Jan 26 10:19:17 crc kubenswrapper[4827]: I0126 10:19:17.651929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/must-gather-82mc4" event={"ID":"d5733bfc-a425-4a49-b57a-cb6e861764ab","Type":"ContainerStarted","Data":"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0"}
Jan 26 10:19:17 crc kubenswrapper[4827]: I0126 10:19:17.679034 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tphk2/must-gather-82mc4" podStartSLOduration=2.306575939 podStartE2EDuration="10.679015975s" podCreationTimestamp="2026-01-26 10:19:07 +0000 UTC" firstStartedPulling="2026-01-26 10:19:08.416897785 +0000 UTC m=+4377.065569604" lastFinishedPulling="2026-01-26 10:19:16.789337821 +0000 UTC m=+4385.438009640" observedRunningTime="2026-01-26 10:19:17.673272338 +0000 UTC m=+4386.321944177" watchObservedRunningTime="2026-01-26 10:19:17.679015975 +0000 UTC m=+4386.327687814"
Jan 26 10:19:22 crc kubenswrapper[4827]: I0126 10:19:22.790818 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tphk2/crc-debug-ns4r9"]
Jan 26 10:19:22 crc kubenswrapper[4827]: I0126 10:19:22.792306 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:22 crc kubenswrapper[4827]: I0126 10:19:22.956468 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:22 crc kubenswrapper[4827]: I0126 10:19:22.956927 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbz8\" (UniqueName: \"kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc kubenswrapper[4827]: I0126 10:19:23.058916 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtbz8\" (UniqueName: \"kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc kubenswrapper[4827]: I0126 10:19:23.059100 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc kubenswrapper[4827]: I0126 10:19:23.059203 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc 
kubenswrapper[4827]: I0126 10:19:23.089200 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtbz8\" (UniqueName: \"kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8\") pod \"crc-debug-ns4r9\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") " pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc kubenswrapper[4827]: I0126 10:19:23.108322 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:19:23 crc kubenswrapper[4827]: I0126 10:19:23.718257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-ns4r9" event={"ID":"8bbf85c2-522b-4241-825c-770ccd8d8d8e","Type":"ContainerStarted","Data":"a07f806ec1d4651a4b373b8152419837f4f01b1049d4a98630db84e8f8e9895f"}
Jan 26 10:19:26 crc kubenswrapper[4827]: I0126 10:19:26.703062 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:19:26 crc kubenswrapper[4827]: E0126 10:19:26.703418 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:19:35 crc kubenswrapper[4827]: I0126 10:19:35.811847 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-ns4r9" event={"ID":"8bbf85c2-522b-4241-825c-770ccd8d8d8e","Type":"ContainerStarted","Data":"4f7456d88f05e7c3b2f351fe5d9c0c6b9df7edf5c330a729ab62c7fd4dc82524"}
Jan 26 10:19:35 crc kubenswrapper[4827]: I0126 10:19:35.832388 4827 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-tphk2/crc-debug-ns4r9" podStartSLOduration=1.997410188 podStartE2EDuration="13.832355753s" podCreationTimestamp="2026-01-26 10:19:22 +0000 UTC" firstStartedPulling="2026-01-26 10:19:23.139131486 +0000 UTC m=+4391.787803305" lastFinishedPulling="2026-01-26 10:19:34.974077051 +0000 UTC m=+4403.622748870" observedRunningTime="2026-01-26 10:19:35.824702073 +0000 UTC m=+4404.473373882" watchObservedRunningTime="2026-01-26 10:19:35.832355753 +0000 UTC m=+4404.481027572"
Jan 26 10:19:37 crc kubenswrapper[4827]: I0126 10:19:37.703387 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:19:37 crc kubenswrapper[4827]: E0126 10:19:37.704194 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:19:52 crc kubenswrapper[4827]: I0126 10:19:52.702479 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:19:52 crc kubenswrapper[4827]: E0126 10:19:52.703251 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:20:05 crc kubenswrapper[4827]: I0126 10:20:05.707831 4827 scope.go:117] "RemoveContainer" 
containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:20:05 crc kubenswrapper[4827]: E0126 10:20:05.709578 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:20:16 crc kubenswrapper[4827]: I0126 10:20:16.704064 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:20:16 crc kubenswrapper[4827]: E0126 10:20:16.704599 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:20:18 crc kubenswrapper[4827]: I0126 10:20:18.188380 4827 generic.go:334] "Generic (PLEG): container finished" podID="8bbf85c2-522b-4241-825c-770ccd8d8d8e" containerID="4f7456d88f05e7c3b2f351fe5d9c0c6b9df7edf5c330a729ab62c7fd4dc82524" exitCode=0
Jan 26 10:20:18 crc kubenswrapper[4827]: I0126 10:20:18.188581 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-ns4r9" event={"ID":"8bbf85c2-522b-4241-825c-770ccd8d8d8e","Type":"ContainerDied","Data":"4f7456d88f05e7c3b2f351fe5d9c0c6b9df7edf5c330a729ab62c7fd4dc82524"}
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.557548 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.595304 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-ns4r9"]
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.606046 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-ns4r9"]
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.637913 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host\") pod \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") "
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.638025 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtbz8\" (UniqueName: \"kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8\") pod \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\" (UID: \"8bbf85c2-522b-4241-825c-770ccd8d8d8e\") "
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.638252 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host" (OuterVolumeSpecName: "host") pod "8bbf85c2-522b-4241-825c-770ccd8d8d8e" (UID: "8bbf85c2-522b-4241-825c-770ccd8d8d8e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.638573 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bbf85c2-522b-4241-825c-770ccd8d8d8e-host\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.648372 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8" (OuterVolumeSpecName: "kube-api-access-gtbz8") pod "8bbf85c2-522b-4241-825c-770ccd8d8d8e" (UID: "8bbf85c2-522b-4241-825c-770ccd8d8d8e"). InnerVolumeSpecName "kube-api-access-gtbz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.711875 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbf85c2-522b-4241-825c-770ccd8d8d8e" path="/var/lib/kubelet/pods/8bbf85c2-522b-4241-825c-770ccd8d8d8e/volumes"
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.740765 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtbz8\" (UniqueName: \"kubernetes.io/projected/8bbf85c2-522b-4241-825c-770ccd8d8d8e-kube-api-access-gtbz8\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.843018 4827 scope.go:117] "RemoveContainer" containerID="f17bb2ef34b129228e8b2d5c194b1bac7da1e73b2bf113e79f940f05ba7e438a"
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.870041 4827 scope.go:117] "RemoveContainer" containerID="8d1f11a51c2198b00aaa0bd4434efc3b08586516a7eacd900b4964f9d0e500fb"
Jan 26 10:20:19 crc kubenswrapper[4827]: I0126 10:20:19.911674 4827 scope.go:117] "RemoveContainer" containerID="80929512c9a8cde369316ca9628747adb004d0e27032c9ae39b4795051c6190b"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.206848 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-ns4r9"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.206848 4827 scope.go:117] "RemoveContainer" containerID="4f7456d88f05e7c3b2f351fe5d9c0c6b9df7edf5c330a729ab62c7fd4dc82524"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.909982 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tphk2/crc-debug-zmhfp"]
Jan 26 10:20:20 crc kubenswrapper[4827]: E0126 10:20:20.910700 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbf85c2-522b-4241-825c-770ccd8d8d8e" containerName="container-00"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.910716 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbf85c2-522b-4241-825c-770ccd8d8d8e" containerName="container-00"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.910941 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbf85c2-522b-4241-825c-770ccd8d8d8e" containerName="container-00"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.911669 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.962072 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg58z\" (UniqueName: \"kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:20 crc kubenswrapper[4827]: I0126 10:20:20.962305 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:21 crc kubenswrapper[4827]: I0126 10:20:21.063784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:21 crc kubenswrapper[4827]: I0126 10:20:21.063924 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg58z\" (UniqueName: \"kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:21 crc kubenswrapper[4827]: I0126 10:20:21.063921 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:21 crc 
kubenswrapper[4827]: I0126 10:20:21.081412 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg58z\" (UniqueName: \"kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z\") pod \"crc-debug-zmhfp\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") " pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:21 crc kubenswrapper[4827]: I0126 10:20:21.224373 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:22 crc kubenswrapper[4827]: I0126 10:20:22.233388 4827 generic.go:334] "Generic (PLEG): container finished" podID="55f1eae0-e362-4ff2-9106-d45ee8beb50a" containerID="e776b840969b6e0089e17e71f85aceb35122077c281d9b46ca7ade7292e81ef5" exitCode=0
Jan 26 10:20:22 crc kubenswrapper[4827]: I0126 10:20:22.233501 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-zmhfp" event={"ID":"55f1eae0-e362-4ff2-9106-d45ee8beb50a","Type":"ContainerDied","Data":"e776b840969b6e0089e17e71f85aceb35122077c281d9b46ca7ade7292e81ef5"}
Jan 26 10:20:22 crc kubenswrapper[4827]: I0126 10:20:22.233971 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-zmhfp" event={"ID":"55f1eae0-e362-4ff2-9106-d45ee8beb50a","Type":"ContainerStarted","Data":"5d95cfceadd55d67b019c3046f8b87994c79e4b35b484092fa1ac64fec5c4f0a"}
Jan 26 10:20:22 crc kubenswrapper[4827]: I0126 10:20:22.587781 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-zmhfp"]
Jan 26 10:20:22 crc kubenswrapper[4827]: I0126 10:20:22.594125 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-zmhfp"]
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.334765 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.404965 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg58z\" (UniqueName: \"kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z\") pod \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") "
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.405238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host\") pod \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\" (UID: \"55f1eae0-e362-4ff2-9106-d45ee8beb50a\") "
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.405349 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host" (OuterVolumeSpecName: "host") pod "55f1eae0-e362-4ff2-9106-d45ee8beb50a" (UID: "55f1eae0-e362-4ff2-9106-d45ee8beb50a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.405966 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f1eae0-e362-4ff2-9106-d45ee8beb50a-host\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.412928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z" (OuterVolumeSpecName: "kube-api-access-jg58z") pod "55f1eae0-e362-4ff2-9106-d45ee8beb50a" (UID: "55f1eae0-e362-4ff2-9106-d45ee8beb50a"). InnerVolumeSpecName "kube-api-access-jg58z". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.508083 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg58z\" (UniqueName: \"kubernetes.io/projected/55f1eae0-e362-4ff2-9106-d45ee8beb50a-kube-api-access-jg58z\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.711891 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f1eae0-e362-4ff2-9106-d45ee8beb50a" path="/var/lib/kubelet/pods/55f1eae0-e362-4ff2-9106-d45ee8beb50a/volumes"
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.924175 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tphk2/crc-debug-wrqvg"]
Jan 26 10:20:23 crc kubenswrapper[4827]: E0126 10:20:23.924984 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f1eae0-e362-4ff2-9106-d45ee8beb50a" containerName="container-00"
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.925005 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f1eae0-e362-4ff2-9106-d45ee8beb50a" containerName="container-00"
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.925230 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f1eae0-e362-4ff2-9106-d45ee8beb50a" containerName="container-00"
Jan 26 10:20:23 crc kubenswrapper[4827]: I0126 10:20:23.927104 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.018572 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.018663 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-948p7\" (UniqueName: \"kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.120925 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-948p7\" (UniqueName: \"kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.121147 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.121291 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc 
kubenswrapper[4827]: I0126 10:20:24.143215 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-948p7\" (UniqueName: \"kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7\") pod \"crc-debug-wrqvg\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") " pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.243696 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.251085 4827 scope.go:117] "RemoveContainer" containerID="e776b840969b6e0089e17e71f85aceb35122077c281d9b46ca7ade7292e81ef5"
Jan 26 10:20:24 crc kubenswrapper[4827]: I0126 10:20:24.251234 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-zmhfp"
Jan 26 10:20:25 crc kubenswrapper[4827]: I0126 10:20:25.262082 4827 generic.go:334] "Generic (PLEG): container finished" podID="b6ad70d5-0382-4297-8dfa-b60b1f430645" containerID="26d701a4b17e23a612fdd77dee3b3d7c783188af95397c96de71a4fa096b1b25" exitCode=0
Jan 26 10:20:25 crc kubenswrapper[4827]: I0126 10:20:25.262335 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-wrqvg" event={"ID":"b6ad70d5-0382-4297-8dfa-b60b1f430645","Type":"ContainerDied","Data":"26d701a4b17e23a612fdd77dee3b3d7c783188af95397c96de71a4fa096b1b25"}
Jan 26 10:20:25 crc kubenswrapper[4827]: I0126 10:20:25.262363 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/crc-debug-wrqvg" event={"ID":"b6ad70d5-0382-4297-8dfa-b60b1f430645","Type":"ContainerStarted","Data":"077f3b6bd1c6917c158e7e0610fe6d0bd3771eed9fc0c0b4447a13a53afc0a17"}
Jan 26 10:20:25 crc kubenswrapper[4827]: I0126 10:20:25.313802 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-wrqvg"]
Jan 26 
10:20:25 crc kubenswrapper[4827]: I0126 10:20:25.324452 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tphk2/crc-debug-wrqvg"]
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.805031 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.872855 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host\") pod \"b6ad70d5-0382-4297-8dfa-b60b1f430645\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") "
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.872949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-948p7\" (UniqueName: \"kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7\") pod \"b6ad70d5-0382-4297-8dfa-b60b1f430645\" (UID: \"b6ad70d5-0382-4297-8dfa-b60b1f430645\") "
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.872957 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host" (OuterVolumeSpecName: "host") pod "b6ad70d5-0382-4297-8dfa-b60b1f430645" (UID: "b6ad70d5-0382-4297-8dfa-b60b1f430645"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.873334 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6ad70d5-0382-4297-8dfa-b60b1f430645-host\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.889892 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7" (OuterVolumeSpecName: "kube-api-access-948p7") pod "b6ad70d5-0382-4297-8dfa-b60b1f430645" (UID: "b6ad70d5-0382-4297-8dfa-b60b1f430645"). InnerVolumeSpecName "kube-api-access-948p7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:20:26 crc kubenswrapper[4827]: I0126 10:20:26.974729 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-948p7\" (UniqueName: \"kubernetes.io/projected/b6ad70d5-0382-4297-8dfa-b60b1f430645-kube-api-access-948p7\") on node \"crc\" DevicePath \"\""
Jan 26 10:20:27 crc kubenswrapper[4827]: I0126 10:20:27.278050 4827 scope.go:117] "RemoveContainer" containerID="26d701a4b17e23a612fdd77dee3b3d7c783188af95397c96de71a4fa096b1b25"
Jan 26 10:20:27 crc kubenswrapper[4827]: I0126 10:20:27.278106 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/crc-debug-wrqvg"
Jan 26 10:20:27 crc kubenswrapper[4827]: I0126 10:20:27.715546 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ad70d5-0382-4297-8dfa-b60b1f430645" path="/var/lib/kubelet/pods/b6ad70d5-0382-4297-8dfa-b60b1f430645/volumes"
Jan 26 10:20:29 crc kubenswrapper[4827]: I0126 10:20:29.712113 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:20:29 crc kubenswrapper[4827]: E0126 10:20:29.714217 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:20:42 crc kubenswrapper[4827]: I0126 10:20:42.703777 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:20:42 crc kubenswrapper[4827]: E0126 10:20:42.704665 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:20:55 crc kubenswrapper[4827]: I0126 10:20:55.703763 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:20:55 crc kubenswrapper[4827]: E0126 10:20:55.705985 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:21:09 crc kubenswrapper[4827]: I0126 10:21:09.703183 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:21:09 crc kubenswrapper[4827]: E0126 10:21:09.704284 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:21:23 crc kubenswrapper[4827]: I0126 10:21:23.702954 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8"
Jan 26 10:21:23 crc kubenswrapper[4827]: E0126 10:21:23.703768 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.033693 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-549f46df88-ldq7r_149ae16b-d620-417f-a9df-0ff3864c7d08/barbican-api/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.216418 4827 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-549f46df88-ldq7r_149ae16b-d620-417f-a9df-0ff3864c7d08/barbican-api-log/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.272858 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7665698578-xljwl_13b70f90-293b-4b38-be0f-0e5bde0c5e85/barbican-keystone-listener/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.332933 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7665698578-xljwl_13b70f90-293b-4b38-be0f-0e5bde0c5e85/barbican-keystone-listener-log/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.515763 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bb7c7c765-wmktj_ed6134dd-363a-49bb-99bb-6bac419c845a/barbican-worker/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.640650 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bb7c7c765-wmktj_ed6134dd-363a-49bb-99bb-6bac419c845a/barbican-worker-log/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.818397 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k_e2b5fccf-d108-4563-9e78-16e31b6959bf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.904200 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/ceilometer-central-agent/0.log"
Jan 26 10:21:30 crc kubenswrapper[4827]: I0126 10:21:30.931140 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/ceilometer-notification-agent/0.log"
Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.072996 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/sg-core/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.115886 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/proxy-httpd/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.130370 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff_9303c2b8-3943-40d2-b648-a7d24cf50214/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.354405 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k_c3096f06-9fdd-406d-9200-1fa4a2db5006/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.469886 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_07c0b8ae-368e-4e51-8686-6d5ce6def2a9/cinder-api/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.619153 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_07c0b8ae-368e-4e51-8686-6d5ce6def2a9/cinder-api-log/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.750889 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_98911844-c24c-42e7-bf54-ca3cfb5d77c5/probe/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.900515 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_98911844-c24c-42e7-bf54-ca3cfb5d77c5/cinder-backup/0.log" Jan 26 10:21:31 crc kubenswrapper[4827]: I0126 10:21:31.960936 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_aabf5c90-5a50-4950-a417-ddf73a2fe2ce/cinder-scheduler/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.031257 4827 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_aabf5c90-5a50-4950-a417-ddf73a2fe2ce/probe/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.238849 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4313f2bb-7f66-41c6-9c0e-87ae0d9eea08/probe/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.294978 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4313f2bb-7f66-41c6-9c0e-87ae0d9eea08/cinder-volume/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.393682 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-f6x26_dffac4b8-657b-40f3-86cc-6138f70d889b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.585374 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-x26cv_c44eb77f-7f7f-461f-b0aa-cbd347852699/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.643133 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/init/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.862042 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/init/0.log" Jan 26 10:21:32 crc kubenswrapper[4827]: I0126 10:21:32.940845 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d29d9533-48d6-4314-8bab-835c6804dcd6/glance-httpd/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.015410 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/dnsmasq-dns/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.079920 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d29d9533-48d6-4314-8bab-835c6804dcd6/glance-log/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.256374 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_792cbe2a-cbf2-48f0-8eac-3c3d5b91538a/glance-httpd/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.272479 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_792cbe2a-cbf2-48f0-8eac-3c3d5b91538a/glance-log/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.547649 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8d4867b4-j5kkp_4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d/horizon/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.619322 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8d4867b4-j5kkp_4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d/horizon-log/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.669695 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7_c4931278-b623-46c2-8444-9a7b75093703/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:33 crc kubenswrapper[4827]: I0126 10:21:33.901700 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gffwb_e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.103798 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bb549d74c-6hlgt_053973de-195d-44a8-ba9f-d665b8a53c87/keystone-api/0.log" 
Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.103922 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490361-2sb62_5f315ab7-c066-4333-93bb-1e479301743a/keystone-cron/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.151724 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9d36e60e-5a78-4ce6-8997-688333022bc0/kube-state-metrics/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.451526 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gfc92_9f2e9aa2-d136-40ad-a382-41abb6ce645a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.545805 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5176c3b1-983f-4339-aa88-18ed0df10566/manila-api/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.557445 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5176c3b1-983f-4339-aa88-18ed0df10566/manila-api-log/0.log" Jan 26 10:21:34 crc kubenswrapper[4827]: I0126 10:21:34.703420 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:21:34 crc kubenswrapper[4827]: E0126 10:21:34.703700 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.094786 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_d4d9dc34-401b-43d6-97a0-c628eb57f517/manila-scheduler/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.128565 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d4d9dc34-401b-43d6-97a0-c628eb57f517/probe/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.227930 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0/manila-share/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.418503 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0/probe/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.698829 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bdc4699d9-tnd4c_09595eb4-a10d-44f0-9aee-927389e0accb/neutron-api/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.718523 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q_fd1780e3-c584-4f74-b260-cec896594153/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:35 crc kubenswrapper[4827]: I0126 10:21:35.747941 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bdc4699d9-tnd4c_09595eb4-a10d-44f0-9aee-927389e0accb/neutron-httpd/0.log" Jan 26 10:21:36 crc kubenswrapper[4827]: I0126 10:21:36.436263 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f509fce4-52e1-4f74-8cfa-cfe156852aed/nova-api-log/0.log" Jan 26 10:21:36 crc kubenswrapper[4827]: I0126 10:21:36.623890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f509fce4-52e1-4f74-8cfa-cfe156852aed/nova-api-api/0.log" Jan 26 10:21:36 crc kubenswrapper[4827]: I0126 10:21:36.718237 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_ef7d553f-5037-4ed5-9d99-c278f206381e/nova-cell0-conductor-conductor/0.log" Jan 26 10:21:36 crc kubenswrapper[4827]: I0126 10:21:36.729282 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a7163549-12a8-403d-b952-a03566f40771/nova-cell1-conductor-conductor/0.log" Jan 26 10:21:36 crc kubenswrapper[4827]: I0126 10:21:36.973733 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fd4d28c4-421a-40a9-8629-e832e0aa002f/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.158507 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v_62a102b4-e915-4a42-a644-91624460cb06/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.391618 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20987ce4-16e9-4364-9742-44454d336e33/nova-metadata-log/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.703396 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_21741788-081f-4f17-973d-ae145a0469ff/nova-scheduler-scheduler/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.747296 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/mysql-bootstrap/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.952321 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/mysql-bootstrap/0.log" Jan 26 10:21:37 crc kubenswrapper[4827]: I0126 10:21:37.975931 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/galera/0.log" Jan 26 
10:21:38 crc kubenswrapper[4827]: I0126 10:21:38.179902 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/mysql-bootstrap/0.log" Jan 26 10:21:38 crc kubenswrapper[4827]: I0126 10:21:38.476274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/galera/0.log" Jan 26 10:21:38 crc kubenswrapper[4827]: I0126 10:21:38.506561 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/mysql-bootstrap/0.log" Jan 26 10:21:38 crc kubenswrapper[4827]: I0126 10:21:38.786072 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a74b1cb5-e36a-49d0-b075-f3f269487645/openstackclient/0.log" Jan 26 10:21:38 crc kubenswrapper[4827]: I0126 10:21:38.828007 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9dxlf_0ff255bf-dcef-4418-ac66-802299400786/openstack-network-exporter/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.016943 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20987ce4-16e9-4364-9742-44454d336e33/nova-metadata-metadata/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.087087 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server-init/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.313265 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server-init/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.375705 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovs-vswitchd/0.log" Jan 26 10:21:39 crc 
kubenswrapper[4827]: I0126 10:21:39.459512 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.582567 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sjsvm_60184b1a-f656-4b71-bf13-2953f715bc12/ovn-controller/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.854270 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b4dd74b4-df0b-414c-ba61-5d428eb2f33e/openstack-network-exporter/0.log" Jan 26 10:21:39 crc kubenswrapper[4827]: I0126 10:21:39.859177 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xstrh_46064584-0d9c-4054-87ce-e417f22cd6ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:40 crc kubenswrapper[4827]: I0126 10:21:40.066468 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b4dd74b4-df0b-414c-ba61-5d428eb2f33e/ovn-northd/0.log" Jan 26 10:21:40 crc kubenswrapper[4827]: I0126 10:21:40.191058 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e/openstack-network-exporter/0.log" Jan 26 10:21:40 crc kubenswrapper[4827]: I0126 10:21:40.217264 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e/ovsdbserver-nb/0.log" Jan 26 10:21:40 crc kubenswrapper[4827]: I0126 10:21:40.672528 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f55a507a-514c-48de-a8e8-8a3ef3eef284/ovsdbserver-sb/0.log" Jan 26 10:21:40 crc kubenswrapper[4827]: I0126 10:21:40.689051 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f55a507a-514c-48de-a8e8-8a3ef3eef284/openstack-network-exporter/0.log" Jan 26 
10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.062928 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84fd67f47d-vt6sw_225ee5ae-fc10-4dd9-af29-0d227dd81802/placement-api/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.118910 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84fd67f47d-vt6sw_225ee5ae-fc10-4dd9-af29-0d227dd81802/placement-log/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.195616 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/setup-container/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.426255 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/rabbitmq/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.453423 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/setup-container/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.562904 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/setup-container/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.768450 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/setup-container/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.924540 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/rabbitmq/0.log" Jan 26 10:21:41 crc kubenswrapper[4827]: I0126 10:21:41.931467 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm_bb198638-c527-412e-96ae-d0cdc3c4abbd/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:42 crc kubenswrapper[4827]: I0126 10:21:42.538910 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl_e5c7854f-b129-4e2a-9af1-ce45d61e1ae2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:42 crc kubenswrapper[4827]: I0126 10:21:42.576806 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rkbpm_cf1f3c72-ea59-4949-aec9-51d06e078251/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:42 crc kubenswrapper[4827]: I0126 10:21:42.791405 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pjkdr_9ded20fe-a752-4b6f-94a3-b07079038103/ssh-known-hosts-edpm-deployment/0.log" Jan 26 10:21:42 crc kubenswrapper[4827]: I0126 10:21:42.830954 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a3afb0c8-7da0-4f91-a689-921ef566e7a2/tempest-tests-tempest-tests-runner/0.log" Jan 26 10:21:43 crc kubenswrapper[4827]: I0126 10:21:43.467918 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c074f00d-8c21-4bab-9019-138c164586fc/test-operator-logs-container/0.log" Jan 26 10:21:43 crc kubenswrapper[4827]: I0126 10:21:43.517761 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk_448918db-8118-4738-aeed-81ba5f247cbb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:21:49 crc kubenswrapper[4827]: I0126 10:21:49.703453 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:21:49 crc kubenswrapper[4827]: E0126 
10:21:49.704131 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:21:56 crc kubenswrapper[4827]: I0126 10:21:56.355829 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_da6ed528-8ee6-421d-a921-a9b6d1382d45/memcached/0.log" Jan 26 10:22:03 crc kubenswrapper[4827]: I0126 10:22:03.702789 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:22:03 crc kubenswrapper[4827]: E0126 10:22:03.703385 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.065573 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.222270 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.266332 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.283796 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.474721 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.474890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/extract/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.532335 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.820936 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-82zp4_4b99eea5-fc5a-4441-8858-1a500c49c429/manager/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.834463 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7d95c_571aa666-d430-47aa-a48b-91b5a2555723/manager/0.log" Jan 26 10:22:16 crc kubenswrapper[4827]: I0126 10:22:16.969225 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-g47s2_90405ca9-cf52-4ad1-94b9-54aacb8e5708/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 
10:22:17.063570 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-w42nm_3759f1d2-941a-496f-a51e-aa2bd6fbeeec/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.241240 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-f4pjj_52992458-b4f0-409b-8be0-96a545a80839/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.252484 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-hj2q8_86d77aba-3a0a-43d5-b592-2c45d866515c/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.447458 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-96nv5_9f1d37d2-59af-4a07-8d64-f1636eee3929/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.754227 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-skgxf_64d1c33b-eace-4919-be5d-463f9621036a/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.759890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ldvbb_84b85200-c9f6-4759-bb84-1513165fe742/manager/0.log" Jan 26 10:22:17 crc kubenswrapper[4827]: I0126 10:22:17.811187 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-tmb5m_7588c42e-08d0-4c2d-b62d-07fc7257cf8f/manager/0.log" Jan 26 10:22:18 crc kubenswrapper[4827]: I0126 10:22:18.069066 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp_7fa19e2b-55c2-4e72-882a-eb4437b37c50/manager/0.log" Jan 26 10:22:18 crc 
kubenswrapper[4827]: I0126 10:22:18.080363 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-5tq7r_58431f1d-bbf1-459c-9f79-39c94712b9d7/manager/0.log" Jan 26 10:22:18 crc kubenswrapper[4827]: I0126 10:22:18.289026 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-9g9tb_e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4/manager/0.log" Jan 26 10:22:18 crc kubenswrapper[4827]: I0126 10:22:18.352087 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-l4gjk_c3b4b2f4-2b69-4c36-b967-27c70f7a5767/manager/0.log" Jan 26 10:22:18 crc kubenswrapper[4827]: I0126 10:22:18.486153 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-848957f4b4lzc5x_87aea9ac-4117-4870-81a9-44adabc28383/manager/0.log" Jan 26 10:22:18 crc kubenswrapper[4827]: I0126 10:22:18.703824 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:22:18 crc kubenswrapper[4827]: E0126 10:22:18.704127 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:22:19 crc kubenswrapper[4827]: I0126 10:22:19.001606 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f6799c556-8bwdr_62ec7e94-ac44-47b3-8a19-d0b443a135d4/operator/0.log" Jan 26 10:22:19 crc kubenswrapper[4827]: I0126 10:22:19.436978 4827 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ttbws_e1ce3819-36a2-4cc6-9942-e8881815e42e/registry-server/0.log" Jan 26 10:22:19 crc kubenswrapper[4827]: I0126 10:22:19.439615 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-vq7vj_565c65e3-ea09-4057-81de-381377042c19/manager/0.log" Jan 26 10:22:19 crc kubenswrapper[4827]: I0126 10:22:19.827911 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-rzc28_424c27d6-31d7-4a37-a7ef-c89099773070/manager/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.067845 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-58bb7_7eea6dea-82a0-4c66-a5a0-0b7d11878264/operator/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.236600 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65d46cfd44-jsnhm_8ba78edc-c408-4071-ac8f-432e12ebb708/manager/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.400563 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-fcj6p_12001a2b-7c86-41a4-ba17-a0d586aea6e5/manager/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.641675 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-mbt6s_eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b/manager/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.710922 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-9qw4q_1cb20984-f7df-4d0b-9434-86182d952bb1/manager/0.log" Jan 26 10:22:20 crc kubenswrapper[4827]: I0126 10:22:20.782107 4827 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-cb96z_2e2bf61f-063e-4fa4-aa92-6c14ee83fc66/manager/0.log" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.573596 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:31 crc kubenswrapper[4827]: E0126 10:22:31.575343 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ad70d5-0382-4297-8dfa-b60b1f430645" containerName="container-00" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.575361 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ad70d5-0382-4297-8dfa-b60b1f430645" containerName="container-00" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.575606 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ad70d5-0382-4297-8dfa-b60b1f430645" containerName="container-00" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.577531 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.591331 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.705076 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.705274 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.705347 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsxwk\" (UniqueName: \"kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.707774 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:22:31 crc kubenswrapper[4827]: E0126 10:22:31.708038 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.806784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.807179 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.807455 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.807678 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.807724 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsxwk\" (UniqueName: 
\"kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.838619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsxwk\" (UniqueName: \"kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk\") pod \"redhat-operators-ffr7k\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:31 crc kubenswrapper[4827]: I0126 10:22:31.900239 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:32 crc kubenswrapper[4827]: I0126 10:22:32.580877 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:33 crc kubenswrapper[4827]: I0126 10:22:33.409373 4827 generic.go:334] "Generic (PLEG): container finished" podID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerID="524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e" exitCode=0 Jan 26 10:22:33 crc kubenswrapper[4827]: I0126 10:22:33.409484 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerDied","Data":"524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e"} Jan 26 10:22:33 crc kubenswrapper[4827]: I0126 10:22:33.409702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerStarted","Data":"e3d6bf0c27cc4e37c57940632d2df64f5ad4220033df843ec946eb062f2a32e5"} Jan 26 10:22:34 crc kubenswrapper[4827]: I0126 10:22:34.418892 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerStarted","Data":"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c"} Jan 26 10:22:38 crc kubenswrapper[4827]: I0126 10:22:38.461233 4827 generic.go:334] "Generic (PLEG): container finished" podID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerID="571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c" exitCode=0 Jan 26 10:22:38 crc kubenswrapper[4827]: I0126 10:22:38.461331 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerDied","Data":"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c"} Jan 26 10:22:39 crc kubenswrapper[4827]: I0126 10:22:39.474151 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerStarted","Data":"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8"} Jan 26 10:22:39 crc kubenswrapper[4827]: I0126 10:22:39.498910 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ffr7k" podStartSLOduration=2.848900087 podStartE2EDuration="8.498891493s" podCreationTimestamp="2026-01-26 10:22:31 +0000 UTC" firstStartedPulling="2026-01-26 10:22:33.411305655 +0000 UTC m=+4582.059977474" lastFinishedPulling="2026-01-26 10:22:39.061297041 +0000 UTC m=+4587.709968880" observedRunningTime="2026-01-26 10:22:39.492709516 +0000 UTC m=+4588.141381335" watchObservedRunningTime="2026-01-26 10:22:39.498891493 +0000 UTC m=+4588.147563312" Jan 26 10:22:41 crc kubenswrapper[4827]: I0126 10:22:41.901344 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:41 crc kubenswrapper[4827]: I0126 10:22:41.901784 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:42 crc kubenswrapper[4827]: I0126 10:22:42.952885 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ffr7k" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="registry-server" probeResult="failure" output=< Jan 26 10:22:42 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 10:22:42 crc kubenswrapper[4827]: > Jan 26 10:22:44 crc kubenswrapper[4827]: I0126 10:22:44.311501 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n4rf7_00acaa94-9dfe-4d0f-9ea2-17870a8c1af5/control-plane-machine-set-operator/0.log" Jan 26 10:22:44 crc kubenswrapper[4827]: I0126 10:22:44.478388 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rtv5j_00f5a10b-1353-4060-a2b0-7cc7d9980817/kube-rbac-proxy/0.log" Jan 26 10:22:44 crc kubenswrapper[4827]: I0126 10:22:44.616600 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rtv5j_00f5a10b-1353-4060-a2b0-7cc7d9980817/machine-api-operator/0.log" Jan 26 10:22:45 crc kubenswrapper[4827]: I0126 10:22:45.703353 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:22:45 crc kubenswrapper[4827]: E0126 10:22:45.703711 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:22:51 
crc kubenswrapper[4827]: I0126 10:22:51.963292 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:52 crc kubenswrapper[4827]: I0126 10:22:52.025836 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:52 crc kubenswrapper[4827]: I0126 10:22:52.200010 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:53 crc kubenswrapper[4827]: I0126 10:22:53.595378 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ffr7k" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="registry-server" containerID="cri-o://b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8" gracePeriod=2 Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.281898 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.406120 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content\") pod \"86cc77a9-f5ce-43ce-bc26-910778c04b25\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.406209 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsxwk\" (UniqueName: \"kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk\") pod \"86cc77a9-f5ce-43ce-bc26-910778c04b25\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.406403 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities\") pod \"86cc77a9-f5ce-43ce-bc26-910778c04b25\" (UID: \"86cc77a9-f5ce-43ce-bc26-910778c04b25\") " Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.407251 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities" (OuterVolumeSpecName: "utilities") pod "86cc77a9-f5ce-43ce-bc26-910778c04b25" (UID: "86cc77a9-f5ce-43ce-bc26-910778c04b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.415869 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk" (OuterVolumeSpecName: "kube-api-access-jsxwk") pod "86cc77a9-f5ce-43ce-bc26-910778c04b25" (UID: "86cc77a9-f5ce-43ce-bc26-910778c04b25"). InnerVolumeSpecName "kube-api-access-jsxwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.508701 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsxwk\" (UniqueName: \"kubernetes.io/projected/86cc77a9-f5ce-43ce-bc26-910778c04b25-kube-api-access-jsxwk\") on node \"crc\" DevicePath \"\"" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.508743 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.538466 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86cc77a9-f5ce-43ce-bc26-910778c04b25" (UID: "86cc77a9-f5ce-43ce-bc26-910778c04b25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.607685 4827 generic.go:334] "Generic (PLEG): container finished" podID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerID="b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8" exitCode=0 Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.607736 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerDied","Data":"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8"} Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.607768 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffr7k" event={"ID":"86cc77a9-f5ce-43ce-bc26-910778c04b25","Type":"ContainerDied","Data":"e3d6bf0c27cc4e37c57940632d2df64f5ad4220033df843ec946eb062f2a32e5"} Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.607793 
4827 scope.go:117] "RemoveContainer" containerID="b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.607969 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffr7k" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.613991 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86cc77a9-f5ce-43ce-bc26-910778c04b25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.652859 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.661793 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ffr7k"] Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.663307 4827 scope.go:117] "RemoveContainer" containerID="571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.691213 4827 scope.go:117] "RemoveContainer" containerID="524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.732667 4827 scope.go:117] "RemoveContainer" containerID="b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8" Jan 26 10:22:54 crc kubenswrapper[4827]: E0126 10:22:54.733480 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8\": container with ID starting with b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8 not found: ID does not exist" containerID="b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.733527 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8"} err="failed to get container status \"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8\": rpc error: code = NotFound desc = could not find container \"b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8\": container with ID starting with b5fc7096252563dea3021e706b1e121921b8f573a215de1c659d82d98790f5a8 not found: ID does not exist" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.733553 4827 scope.go:117] "RemoveContainer" containerID="571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c" Jan 26 10:22:54 crc kubenswrapper[4827]: E0126 10:22:54.733958 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c\": container with ID starting with 571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c not found: ID does not exist" containerID="571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.733995 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c"} err="failed to get container status \"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c\": rpc error: code = NotFound desc = could not find container \"571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c\": container with ID starting with 571bdab085ba24454fbac75d61c78c9cf615bb9fc429b7a0570a103ec2a3031c not found: ID does not exist" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.734019 4827 scope.go:117] "RemoveContainer" containerID="524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e" Jan 26 10:22:54 crc kubenswrapper[4827]: E0126 
10:22:54.734338 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e\": container with ID starting with 524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e not found: ID does not exist" containerID="524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e" Jan 26 10:22:54 crc kubenswrapper[4827]: I0126 10:22:54.734376 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e"} err="failed to get container status \"524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e\": rpc error: code = NotFound desc = could not find container \"524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e\": container with ID starting with 524875cf764201304172bf40cf6e682fabc74cd0d0e24d536b629b3beb82823e not found: ID does not exist" Jan 26 10:22:55 crc kubenswrapper[4827]: I0126 10:22:55.716962 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" path="/var/lib/kubelet/pods/86cc77a9-f5ce-43ce-bc26-910778c04b25/volumes" Jan 26 10:22:59 crc kubenswrapper[4827]: I0126 10:22:59.703727 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:22:59 crc kubenswrapper[4827]: E0126 10:22:59.704523 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:23:00 crc kubenswrapper[4827]: I0126 10:23:00.342399 
4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-5ctth_4614716e-593c-44d6-b054-f33ad6966d0b/cert-manager-controller/0.log" Jan 26 10:23:00 crc kubenswrapper[4827]: I0126 10:23:00.593871 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lgxgj_287fb4fc-a4a8-4758-8c18-ea75f9590b1a/cert-manager-cainjector/0.log" Jan 26 10:23:00 crc kubenswrapper[4827]: I0126 10:23:00.754130 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pw552_3da6c3f3-a01b-4f14-9028-a7e371a518d4/cert-manager-webhook/0.log" Jan 26 10:23:10 crc kubenswrapper[4827]: I0126 10:23:10.703428 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:23:10 crc kubenswrapper[4827]: E0126 10:23:10.704341 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:23:13 crc kubenswrapper[4827]: I0126 10:23:13.670453 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cx5k7_7988cfe9-a182-49bf-b821-06d94fb81ec5/nmstate-console-plugin/0.log" Jan 26 10:23:13 crc kubenswrapper[4827]: I0126 10:23:13.900386 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbplf_63aaa24b-8f3f-426f-910b-65c0a0fa9429/nmstate-handler/0.log" Jan 26 10:23:13 crc kubenswrapper[4827]: I0126 10:23:13.983352 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wkvlg_fc6481c7-2911-4068-9e79-b44f492beda6/kube-rbac-proxy/0.log" Jan 26 10:23:13 crc kubenswrapper[4827]: I0126 10:23:13.986548 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wkvlg_fc6481c7-2911-4068-9e79-b44f492beda6/nmstate-metrics/0.log" Jan 26 10:23:14 crc kubenswrapper[4827]: I0126 10:23:14.105351 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-d5ltb_de78c189-6378-4709-8f64-c4ec5c433064/nmstate-operator/0.log" Jan 26 10:23:14 crc kubenswrapper[4827]: I0126 10:23:14.209342 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-vp6lz_c0175eb7-29d3-4293-ab63-f5db59a1092b/nmstate-webhook/0.log" Jan 26 10:23:25 crc kubenswrapper[4827]: I0126 10:23:25.706914 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:23:26 crc kubenswrapper[4827]: I0126 10:23:26.873987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb"} Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.542928 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"] Jan 26 10:23:29 crc kubenswrapper[4827]: E0126 10:23:29.544114 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="extract-utilities" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.544133 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="extract-utilities" Jan 26 10:23:29 crc kubenswrapper[4827]: E0126 10:23:29.544158 
4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="extract-content" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.544167 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="extract-content" Jan 26 10:23:29 crc kubenswrapper[4827]: E0126 10:23:29.544199 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="registry-server" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.544207 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="registry-server" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.544438 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cc77a9-f5ce-43ce-bc26-910778c04b25" containerName="registry-server" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.546291 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.554254 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"] Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.598593 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.598687 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.598787 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzj4\" (UniqueName: \"kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.700655 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzj4\" (UniqueName: \"kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.700740 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.700789 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.701332 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.701492 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.730824 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzj4\" (UniqueName: \"kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4\") pod \"certified-operators-gpvh9\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") " pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:29 crc kubenswrapper[4827]: I0126 10:23:29.866217 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpvh9" Jan 26 10:23:30 crc kubenswrapper[4827]: I0126 10:23:30.425209 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"] Jan 26 10:23:30 crc kubenswrapper[4827]: W0126 10:23:30.460487 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7004bf_b744_4b64_8dc9_1182c7a3a418.slice/crio-dd82b5b8bc004f64b7378e8c2a3b75f0cff5318efc5fb418031da3d384a1217d WatchSource:0}: Error finding container dd82b5b8bc004f64b7378e8c2a3b75f0cff5318efc5fb418031da3d384a1217d: Status 404 returned error can't find the container with id dd82b5b8bc004f64b7378e8c2a3b75f0cff5318efc5fb418031da3d384a1217d Jan 26 10:23:30 crc kubenswrapper[4827]: I0126 10:23:30.953930 4827 generic.go:334] "Generic (PLEG): container finished" podID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerID="8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba" exitCode=0 Jan 26 10:23:30 crc kubenswrapper[4827]: I0126 10:23:30.954161 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerDied","Data":"8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba"} Jan 26 10:23:30 crc kubenswrapper[4827]: I0126 10:23:30.954190 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerStarted","Data":"dd82b5b8bc004f64b7378e8c2a3b75f0cff5318efc5fb418031da3d384a1217d"} Jan 26 10:23:31 crc kubenswrapper[4827]: I0126 10:23:31.963583 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" 
event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerStarted","Data":"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"}
Jan 26 10:23:32 crc kubenswrapper[4827]: I0126 10:23:32.973267 4827 generic.go:334] "Generic (PLEG): container finished" podID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerID="3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898" exitCode=0
Jan 26 10:23:32 crc kubenswrapper[4827]: I0126 10:23:32.973361 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerDied","Data":"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"}
Jan 26 10:23:33 crc kubenswrapper[4827]: I0126 10:23:33.984722 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerStarted","Data":"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"}
Jan 26 10:23:34 crc kubenswrapper[4827]: I0126 10:23:34.016545 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gpvh9" podStartSLOduration=2.384669687 podStartE2EDuration="5.016513557s" podCreationTimestamp="2026-01-26 10:23:29 +0000 UTC" firstStartedPulling="2026-01-26 10:23:30.957748373 +0000 UTC m=+4639.606420192" lastFinishedPulling="2026-01-26 10:23:33.589592223 +0000 UTC m=+4642.238264062" observedRunningTime="2026-01-26 10:23:34.012301462 +0000 UTC m=+4642.660973281" watchObservedRunningTime="2026-01-26 10:23:34.016513557 +0000 UTC m=+4642.665185366"
Jan 26 10:23:39 crc kubenswrapper[4827]: I0126 10:23:39.866430 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:39 crc kubenswrapper[4827]: I0126 10:23:39.866856 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:39 crc kubenswrapper[4827]: I0126 10:23:39.913836 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:40 crc kubenswrapper[4827]: I0126 10:23:40.102279 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:40 crc kubenswrapper[4827]: I0126 10:23:40.154491 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"]
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.044964 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpvh9" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="registry-server" containerID="cri-o://41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26" gracePeriod=2
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.470029 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.548523 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzj4\" (UniqueName: \"kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4\") pod \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") "
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.548658 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities\") pod \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") "
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.548773 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content\") pod \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\" (UID: \"bd7004bf-b744-4b64-8dc9-1182c7a3a418\") "
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.552897 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities" (OuterVolumeSpecName: "utilities") pod "bd7004bf-b744-4b64-8dc9-1182c7a3a418" (UID: "bd7004bf-b744-4b64-8dc9-1182c7a3a418"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.563868 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4" (OuterVolumeSpecName: "kube-api-access-stzj4") pod "bd7004bf-b744-4b64-8dc9-1182c7a3a418" (UID: "bd7004bf-b744-4b64-8dc9-1182c7a3a418"). InnerVolumeSpecName "kube-api-access-stzj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.600040 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd7004bf-b744-4b64-8dc9-1182c7a3a418" (UID: "bd7004bf-b744-4b64-8dc9-1182c7a3a418"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.650607 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.650647 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7004bf-b744-4b64-8dc9-1182c7a3a418-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 10:23:42 crc kubenswrapper[4827]: I0126 10:23:42.650657 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzj4\" (UniqueName: \"kubernetes.io/projected/bd7004bf-b744-4b64-8dc9-1182c7a3a418-kube-api-access-stzj4\") on node \"crc\" DevicePath \"\""
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.058608 4827 generic.go:334] "Generic (PLEG): container finished" podID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerID="41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26" exitCode=0
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.061622 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpvh9"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.061839 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerDied","Data":"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"}
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.061883 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpvh9" event={"ID":"bd7004bf-b744-4b64-8dc9-1182c7a3a418","Type":"ContainerDied","Data":"dd82b5b8bc004f64b7378e8c2a3b75f0cff5318efc5fb418031da3d384a1217d"}
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.061899 4827 scope.go:117] "RemoveContainer" containerID="41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.093515 4827 scope.go:117] "RemoveContainer" containerID="3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.119077 4827 scope.go:117] "RemoveContainer" containerID="8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.120331 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"]
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.133136 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpvh9"]
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.159605 4827 scope.go:117] "RemoveContainer" containerID="41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"
Jan 26 10:23:43 crc kubenswrapper[4827]: E0126 10:23:43.160174 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26\": container with ID starting with 41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26 not found: ID does not exist" containerID="41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.160214 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26"} err="failed to get container status \"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26\": rpc error: code = NotFound desc = could not find container \"41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26\": container with ID starting with 41c6a07e5fdc67c00e00964655d4872700e42e8b53168c120d5dff3ce6407e26 not found: ID does not exist"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.160239 4827 scope.go:117] "RemoveContainer" containerID="3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"
Jan 26 10:23:43 crc kubenswrapper[4827]: E0126 10:23:43.160676 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898\": container with ID starting with 3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898 not found: ID does not exist" containerID="3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.160717 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898"} err="failed to get container status \"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898\": rpc error: code = NotFound desc = could not find container \"3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898\": container with ID starting with 3b938fbb20382594101693722e60cfdbc7710773b42706889026374709969898 not found: ID does not exist"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.160738 4827 scope.go:117] "RemoveContainer" containerID="8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba"
Jan 26 10:23:43 crc kubenswrapper[4827]: E0126 10:23:43.161100 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba\": container with ID starting with 8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba not found: ID does not exist" containerID="8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.161130 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba"} err="failed to get container status \"8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba\": rpc error: code = NotFound desc = could not find container \"8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba\": container with ID starting with 8b72e96b9621127c274446de6c8999b9e6deac7eb55500e75facf9d972a3bfba not found: ID does not exist"
Jan 26 10:23:43 crc kubenswrapper[4827]: I0126 10:23:43.712680 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" path="/var/lib/kubelet/pods/bd7004bf-b744-4b64-8dc9-1182c7a3a418/volumes"
Jan 26 10:23:44 crc kubenswrapper[4827]: I0126 10:23:44.650012 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zxxx8_0014db8e-0b1a-460c-b64e-bae6cdf0aaf0/kube-rbac-proxy/0.log"
Jan 26 10:23:44 crc kubenswrapper[4827]: I0126 10:23:44.710531 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zxxx8_0014db8e-0b1a-460c-b64e-bae6cdf0aaf0/controller/0.log"
Jan 26 10:23:44 crc kubenswrapper[4827]: I0126 10:23:44.907465 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8pczr_80d0ec40-8d37-43f1-93c8-8c970fba7072/frr-k8s-webhook-server/0.log"
Jan 26 10:23:44 crc kubenswrapper[4827]: I0126 10:23:44.975121 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log"
Jan 26 10:23:45 crc kubenswrapper[4827]: I0126 10:23:45.162335 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log"
Jan 26 10:23:45 crc kubenswrapper[4827]: I0126 10:23:45.191111 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log"
Jan 26 10:23:45 crc kubenswrapper[4827]: I0126 10:23:45.209963 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log"
Jan 26 10:23:45 crc kubenswrapper[4827]: I0126 10:23:45.254420 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.023735 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.085818 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.340362 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.344143 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.563997 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.564142 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.568832 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/controller/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.596205 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.773467 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/kube-rbac-proxy/0.log"
Jan 26 10:23:46 crc kubenswrapper[4827]: I0126 10:23:46.774399 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/frr-metrics/0.log"
Jan 26 10:23:47 crc kubenswrapper[4827]: I0126 10:23:47.263694 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/reloader/0.log"
Jan 26 10:23:47 crc kubenswrapper[4827]: I0126 10:23:47.282872 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/kube-rbac-proxy-frr/0.log"
Jan 26 10:23:47 crc kubenswrapper[4827]: I0126 10:23:47.658320 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d6b7f6684-4h68b_538f5ce6-87b2-41eb-ad3a-92d274c88dbb/manager/0.log"
Jan 26 10:23:47 crc kubenswrapper[4827]: I0126 10:23:47.688259 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b856d8997-9lj9h_619a25aa-4152-45b1-b27e-b1dd154b5738/webhook-server/0.log"
Jan 26 10:23:47 crc kubenswrapper[4827]: I0126 10:23:47.952676 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rcbb_1f700d11-ba3a-4c81-8c29-237825f56448/kube-rbac-proxy/0.log"
Jan 26 10:23:48 crc kubenswrapper[4827]: I0126 10:23:48.096715 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/frr/0.log"
Jan 26 10:23:48 crc kubenswrapper[4827]: I0126 10:23:48.301884 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rcbb_1f700d11-ba3a-4c81-8c29-237825f56448/speaker/0.log"
Jan 26 10:24:02 crc kubenswrapper[4827]: I0126 10:24:02.841369 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log"
Jan 26 10:24:02 crc kubenswrapper[4827]: I0126 10:24:02.982443 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log"
Jan 26 10:24:02 crc kubenswrapper[4827]: I0126 10:24:02.984297 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log"
Jan 26 10:24:02 crc kubenswrapper[4827]: I0126 10:24:02.987768 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.138887 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.240110 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.262091 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/extract/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.378584 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.651955 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.652410 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.684218 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.883622 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.925755 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/extract/0.log"
Jan 26 10:24:03 crc kubenswrapper[4827]: I0126 10:24:03.936532 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.096266 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.280761 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.301982 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.343202 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.573978 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.601541 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log"
Jan 26 10:24:04 crc kubenswrapper[4827]: I0126 10:24:04.872364 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.182748 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/registry-server/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.305364 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.327959 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.371887 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.545041 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.551334 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log"
Jan 26 10:24:05 crc kubenswrapper[4827]: I0126 10:24:05.770613 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hnfv7_164c8367-04d2-44e4-b127-fe8b2a6b62e8/marketplace-operator/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.121473 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/registry-server/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.273132 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.556476 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.579190 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.602290 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.835180 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.835410 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:24:06 crc kubenswrapper[4827]: I0126 10:24:06.990557 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/registry-server/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.103147 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.261613 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.318032 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.319052 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.516541 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.580949 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:24:07 crc kubenswrapper[4827]: I0126 10:24:07.991010 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/registry-server/0.log"
Jan 26 10:24:26 crc kubenswrapper[4827]: E0126 10:24:26.746687 4827 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.166:35830->38.102.83.166:35591: write tcp 38.102.83.166:35830->38.102.83.166:35591: write: broken pipe
Jan 26 10:24:48 crc kubenswrapper[4827]: I0126 10:24:48.998072 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"]
Jan 26 10:24:48 crc kubenswrapper[4827]: E0126 10:24:48.998842 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="extract-utilities"
Jan 26 10:24:48 crc kubenswrapper[4827]: I0126 10:24:48.998855 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="extract-utilities"
Jan 26 10:24:48 crc kubenswrapper[4827]: E0126 10:24:48.998867 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="registry-server"
Jan 26 10:24:48 crc kubenswrapper[4827]: I0126 10:24:48.998873 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="registry-server"
Jan 26 10:24:48 crc kubenswrapper[4827]: E0126 10:24:48.998887 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="extract-content"
Jan 26 10:24:48 crc kubenswrapper[4827]: I0126 10:24:48.998893 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="extract-content"
Jan 26 10:24:48 crc kubenswrapper[4827]: I0126 10:24:48.999081 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7004bf-b744-4b64-8dc9-1182c7a3a418" containerName="registry-server"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.000255 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.022241 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"]
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.141555 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.141862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.142007 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9v9w\" (UniqueName: \"kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.244023 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.244099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.244160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9v9w\" (UniqueName: \"kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.245732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.245925 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.508608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9v9w\" (UniqueName: \"kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w\") pod \"redhat-marketplace-j4tpx\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:49 crc kubenswrapper[4827]: I0126 10:24:49.621334 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:50 crc kubenswrapper[4827]: I0126 10:24:50.207152 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"]
Jan 26 10:24:50 crc kubenswrapper[4827]: I0126 10:24:50.724434 4827 generic.go:334] "Generic (PLEG): container finished" podID="5407a4df-af15-4480-bcab-697059536be0" containerID="84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8" exitCode=0
Jan 26 10:24:50 crc kubenswrapper[4827]: I0126 10:24:50.724504 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerDied","Data":"84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8"}
Jan 26 10:24:50 crc kubenswrapper[4827]: I0126 10:24:50.724767 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerStarted","Data":"e1fe40fb0bd8f0d8578d754755c8a2662585195cd448bd1a07bfe3d763ac1313"}
Jan 26 10:24:50 crc kubenswrapper[4827]: I0126 10:24:50.728067 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 10:24:51 crc kubenswrapper[4827]: I0126 10:24:51.733999 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerStarted","Data":"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f"}
Jan 26 10:24:52 crc kubenswrapper[4827]: I0126 10:24:52.744403 4827 generic.go:334] "Generic (PLEG): container finished" podID="5407a4df-af15-4480-bcab-697059536be0" containerID="5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f" exitCode=0
Jan 26 10:24:52 crc kubenswrapper[4827]: I0126 10:24:52.744506 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerDied","Data":"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f"}
Jan 26 10:24:53 crc kubenswrapper[4827]: I0126 10:24:53.758188 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerStarted","Data":"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe"}
Jan 26 10:24:59 crc kubenswrapper[4827]: I0126 10:24:59.623118 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:59 crc kubenswrapper[4827]: I0126 10:24:59.623632 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:59 crc kubenswrapper[4827]: I0126 10:24:59.679378 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:24:59 crc kubenswrapper[4827]: I0126 10:24:59.711317 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4tpx" podStartSLOduration=9.337052732 podStartE2EDuration="11.711267602s" podCreationTimestamp="2026-01-26 10:24:48 +0000 UTC" firstStartedPulling="2026-01-26 10:24:50.727469253 +0000 UTC m=+4719.376141082" lastFinishedPulling="2026-01-26 10:24:53.101684133 +0000 UTC m=+4721.750355952" observedRunningTime="2026-01-26 10:24:53.787607903 +0000 UTC m=+4722.436279732" watchObservedRunningTime="2026-01-26 10:24:59.711267602 +0000 UTC m=+4728.359939461"
Jan 26 10:25:00 crc kubenswrapper[4827]: I0126 10:25:00.370753 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4tpx"
Jan 26 10:25:00 crc kubenswrapper[4827]: I0126 10:25:00.435176 4827 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"] Jan 26 10:25:01 crc kubenswrapper[4827]: I0126 10:25:01.937305 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4tpx" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="registry-server" containerID="cri-o://79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe" gracePeriod=2 Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.585001 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4tpx" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.657094 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities\") pod \"5407a4df-af15-4480-bcab-697059536be0\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.657211 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9v9w\" (UniqueName: \"kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w\") pod \"5407a4df-af15-4480-bcab-697059536be0\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.657270 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content\") pod \"5407a4df-af15-4480-bcab-697059536be0\" (UID: \"5407a4df-af15-4480-bcab-697059536be0\") " Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.659553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities" (OuterVolumeSpecName: "utilities") pod 
"5407a4df-af15-4480-bcab-697059536be0" (UID: "5407a4df-af15-4480-bcab-697059536be0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.678541 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w" (OuterVolumeSpecName: "kube-api-access-j9v9w") pod "5407a4df-af15-4480-bcab-697059536be0" (UID: "5407a4df-af15-4480-bcab-697059536be0"). InnerVolumeSpecName "kube-api-access-j9v9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.683322 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5407a4df-af15-4480-bcab-697059536be0" (UID: "5407a4df-af15-4480-bcab-697059536be0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.763487 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.764108 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9v9w\" (UniqueName: \"kubernetes.io/projected/5407a4df-af15-4480-bcab-697059536be0-kube-api-access-j9v9w\") on node \"crc\" DevicePath \"\"" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.764194 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5407a4df-af15-4480-bcab-697059536be0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.949980 4827 generic.go:334] "Generic (PLEG): container finished" podID="5407a4df-af15-4480-bcab-697059536be0" containerID="79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe" exitCode=0 Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.950040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerDied","Data":"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe"} Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.950074 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4tpx" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.950097 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4tpx" event={"ID":"5407a4df-af15-4480-bcab-697059536be0","Type":"ContainerDied","Data":"e1fe40fb0bd8f0d8578d754755c8a2662585195cd448bd1a07bfe3d763ac1313"} Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.950122 4827 scope.go:117] "RemoveContainer" containerID="79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe" Jan 26 10:25:02 crc kubenswrapper[4827]: I0126 10:25:02.979094 4827 scope.go:117] "RemoveContainer" containerID="5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.012196 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"] Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.024210 4827 scope.go:117] "RemoveContainer" containerID="84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.035395 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4tpx"] Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.063179 4827 scope.go:117] "RemoveContainer" containerID="79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe" Jan 26 10:25:03 crc kubenswrapper[4827]: E0126 10:25:03.068661 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe\": container with ID starting with 79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe not found: ID does not exist" containerID="79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.068703 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe"} err="failed to get container status \"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe\": rpc error: code = NotFound desc = could not find container \"79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe\": container with ID starting with 79051df1a112b6458bf07914ad946e7b30722d27accabf9bf01b2c4d55eb4ffe not found: ID does not exist" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.068729 4827 scope.go:117] "RemoveContainer" containerID="5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f" Jan 26 10:25:03 crc kubenswrapper[4827]: E0126 10:25:03.069057 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f\": container with ID starting with 5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f not found: ID does not exist" containerID="5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.069081 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f"} err="failed to get container status \"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f\": rpc error: code = NotFound desc = could not find container \"5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f\": container with ID starting with 5036764bedbb6b78f54c07e599b6ec2e5a5a93af3f1d9d35ba10a0ab5038517f not found: ID does not exist" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.069101 4827 scope.go:117] "RemoveContainer" containerID="84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8" Jan 26 10:25:03 crc kubenswrapper[4827]: E0126 
10:25:03.069550 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8\": container with ID starting with 84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8 not found: ID does not exist" containerID="84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.069590 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8"} err="failed to get container status \"84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8\": rpc error: code = NotFound desc = could not find container \"84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8\": container with ID starting with 84bee4f738e41cfec28fa6e8d57e32a48c64a32820e2a67a734e9233eebce1e8 not found: ID does not exist" Jan 26 10:25:03 crc kubenswrapper[4827]: I0126 10:25:03.718201 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5407a4df-af15-4480-bcab-697059536be0" path="/var/lib/kubelet/pods/5407a4df-af15-4480-bcab-697059536be0/volumes" Jan 26 10:25:42 crc kubenswrapper[4827]: I0126 10:25:42.443062 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:25:42 crc kubenswrapper[4827]: I0126 10:25:42.448003 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 10:26:12 crc kubenswrapper[4827]: I0126 10:26:12.268407 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:26:12 crc kubenswrapper[4827]: I0126 10:26:12.269009 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:26:35 crc kubenswrapper[4827]: E0126 10:26:35.518365 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5733bfc_a425_4a49_b57a_cb6e861764ab.slice/crio-a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0.scope\": RecentStats: unable to find data in memory cache]" Jan 26 10:26:35 crc kubenswrapper[4827]: I0126 10:26:35.861219 4827 generic.go:334] "Generic (PLEG): container finished" podID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerID="a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0" exitCode=0 Jan 26 10:26:35 crc kubenswrapper[4827]: I0126 10:26:35.861312 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tphk2/must-gather-82mc4" event={"ID":"d5733bfc-a425-4a49-b57a-cb6e861764ab","Type":"ContainerDied","Data":"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0"} Jan 26 10:26:35 crc kubenswrapper[4827]: I0126 10:26:35.863280 4827 scope.go:117] "RemoveContainer" containerID="a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0" Jan 26 10:26:36 crc kubenswrapper[4827]: I0126 10:26:36.556415 4827 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tphk2_must-gather-82mc4_d5733bfc-a425-4a49-b57a-cb6e861764ab/gather/0.log" Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.269123 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.270004 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.270061 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.271123 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.271207 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb" gracePeriod=600 Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.931618 
4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb" exitCode=0 Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.931673 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb"} Jan 26 10:26:42 crc kubenswrapper[4827]: I0126 10:26:42.932317 4827 scope.go:117] "RemoveContainer" containerID="df21c6f8b52430d8443e3629fbc3d9d2c5fbc5649361261e05dcc0c76d4c56f8" Jan 26 10:26:43 crc kubenswrapper[4827]: I0126 10:26:43.944132 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68"} Jan 26 10:26:45 crc kubenswrapper[4827]: I0126 10:26:45.897135 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tphk2/must-gather-82mc4"] Jan 26 10:26:45 crc kubenswrapper[4827]: I0126 10:26:45.897766 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tphk2/must-gather-82mc4" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="copy" containerID="cri-o://4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e" gracePeriod=2 Jan 26 10:26:45 crc kubenswrapper[4827]: I0126 10:26:45.911343 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tphk2/must-gather-82mc4"] Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.419470 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tphk2_must-gather-82mc4_d5733bfc-a425-4a49-b57a-cb6e861764ab/copy/0.log" Jan 26 10:26:46 crc 
kubenswrapper[4827]: I0126 10:26:46.420249 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.516705 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output\") pod \"d5733bfc-a425-4a49-b57a-cb6e861764ab\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.516797 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlgs4\" (UniqueName: \"kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4\") pod \"d5733bfc-a425-4a49-b57a-cb6e861764ab\" (UID: \"d5733bfc-a425-4a49-b57a-cb6e861764ab\") " Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.523987 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4" (OuterVolumeSpecName: "kube-api-access-nlgs4") pod "d5733bfc-a425-4a49-b57a-cb6e861764ab" (UID: "d5733bfc-a425-4a49-b57a-cb6e861764ab"). InnerVolumeSpecName "kube-api-access-nlgs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.619262 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlgs4\" (UniqueName: \"kubernetes.io/projected/d5733bfc-a425-4a49-b57a-cb6e861764ab-kube-api-access-nlgs4\") on node \"crc\" DevicePath \"\"" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.686540 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5733bfc-a425-4a49-b57a-cb6e861764ab" (UID: "d5733bfc-a425-4a49-b57a-cb6e861764ab"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.721794 4827 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5733bfc-a425-4a49-b57a-cb6e861764ab-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.966627 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tphk2_must-gather-82mc4_d5733bfc-a425-4a49-b57a-cb6e861764ab/copy/0.log" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.967401 4827 generic.go:334] "Generic (PLEG): container finished" podID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerID="4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e" exitCode=143 Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.967445 4827 scope.go:117] "RemoveContainer" containerID="4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.967559 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tphk2/must-gather-82mc4" Jan 26 10:26:46 crc kubenswrapper[4827]: I0126 10:26:46.991902 4827 scope.go:117] "RemoveContainer" containerID="a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0" Jan 26 10:26:47 crc kubenswrapper[4827]: I0126 10:26:47.445725 4827 scope.go:117] "RemoveContainer" containerID="4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e" Jan 26 10:26:47 crc kubenswrapper[4827]: E0126 10:26:47.446265 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e\": container with ID starting with 4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e not found: ID does not exist" containerID="4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e" Jan 26 10:26:47 crc kubenswrapper[4827]: I0126 10:26:47.446407 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e"} err="failed to get container status \"4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e\": rpc error: code = NotFound desc = could not find container \"4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e\": container with ID starting with 4fd8069236b7dd67294b4d61b97c0800bb9ad7aec9d25cf4cf98ce5fb14f8e4e not found: ID does not exist" Jan 26 10:26:47 crc kubenswrapper[4827]: I0126 10:26:47.446507 4827 scope.go:117] "RemoveContainer" containerID="a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0" Jan 26 10:26:47 crc kubenswrapper[4827]: E0126 10:26:47.447941 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0\": container with ID starting with 
a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0 not found: ID does not exist" containerID="a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0" Jan 26 10:26:47 crc kubenswrapper[4827]: I0126 10:26:47.448082 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0"} err="failed to get container status \"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0\": rpc error: code = NotFound desc = could not find container \"a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0\": container with ID starting with a1150b0596e3f5a69b2f44bdcc2c6250830df675b6fafa0b3f3e8809b5401df0 not found: ID does not exist" Jan 26 10:26:48 crc kubenswrapper[4827]: I0126 10:26:48.684161 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" path="/var/lib/kubelet/pods/d5733bfc-a425-4a49-b57a-cb6e861764ab/volumes" Jan 26 10:28:42 crc kubenswrapper[4827]: I0126 10:28:42.268669 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:28:42 crc kubenswrapper[4827]: I0126 10:28:42.269831 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.636598 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:28:53 crc kubenswrapper[4827]: E0126 10:28:53.638137 
4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="extract-utilities" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638172 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="extract-utilities" Jan 26 10:28:53 crc kubenswrapper[4827]: E0126 10:28:53.638223 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="registry-server" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638264 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="registry-server" Jan 26 10:28:53 crc kubenswrapper[4827]: E0126 10:28:53.638295 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="gather" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638311 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="gather" Jan 26 10:28:53 crc kubenswrapper[4827]: E0126 10:28:53.638341 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="copy" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638358 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="copy" Jan 26 10:28:53 crc kubenswrapper[4827]: E0126 10:28:53.638388 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="extract-content" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638405 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="extract-content" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638881 4827 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5407a4df-af15-4480-bcab-697059536be0" containerName="registry-server" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638929 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="gather" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.638983 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5733bfc-a425-4a49-b57a-cb6e861764ab" containerName="copy" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.642307 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.676446 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.795564 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.796059 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.796354 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkfsm\" (UniqueName: \"kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm\") pod \"community-operators-gdrz4\" (UID: 
\"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.897935 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkfsm\" (UniqueName: \"kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.898358 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.898546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.898929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.899081 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") 
" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.917499 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkfsm\" (UniqueName: \"kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm\") pod \"community-operators-gdrz4\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:53 crc kubenswrapper[4827]: I0126 10:28:53.969602 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:28:54 crc kubenswrapper[4827]: I0126 10:28:54.517616 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:28:55 crc kubenswrapper[4827]: I0126 10:28:55.312802 4827 generic.go:334] "Generic (PLEG): container finished" podID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerID="e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f" exitCode=0 Jan 26 10:28:55 crc kubenswrapper[4827]: I0126 10:28:55.313066 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerDied","Data":"e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f"} Jan 26 10:28:55 crc kubenswrapper[4827]: I0126 10:28:55.313367 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerStarted","Data":"3f0ca63486157b48dcebd3980525ab25151987492981ea2606907fc8cb83ee10"} Jan 26 10:28:57 crc kubenswrapper[4827]: I0126 10:28:57.332364 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" 
event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerStarted","Data":"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d"} Jan 26 10:28:58 crc kubenswrapper[4827]: I0126 10:28:58.343505 4827 generic.go:334] "Generic (PLEG): container finished" podID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerID="143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d" exitCode=0 Jan 26 10:28:58 crc kubenswrapper[4827]: I0126 10:28:58.343604 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerDied","Data":"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d"} Jan 26 10:28:59 crc kubenswrapper[4827]: I0126 10:28:59.352891 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerStarted","Data":"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e"} Jan 26 10:28:59 crc kubenswrapper[4827]: I0126 10:28:59.379448 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdrz4" podStartSLOduration=2.926143786 podStartE2EDuration="6.379430928s" podCreationTimestamp="2026-01-26 10:28:53 +0000 UTC" firstStartedPulling="2026-01-26 10:28:55.317285558 +0000 UTC m=+4963.965957377" lastFinishedPulling="2026-01-26 10:28:58.7705727 +0000 UTC m=+4967.419244519" observedRunningTime="2026-01-26 10:28:59.371189435 +0000 UTC m=+4968.019861254" watchObservedRunningTime="2026-01-26 10:28:59.379430928 +0000 UTC m=+4968.028102747" Jan 26 10:29:03 crc kubenswrapper[4827]: I0126 10:29:03.970545 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:03 crc kubenswrapper[4827]: I0126 10:29:03.971257 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:04 crc kubenswrapper[4827]: I0126 10:29:04.057212 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:05 crc kubenswrapper[4827]: I0126 10:29:05.243916 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:05 crc kubenswrapper[4827]: I0126 10:29:05.300809 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:29:06 crc kubenswrapper[4827]: I0126 10:29:06.406382 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gdrz4" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="registry-server" containerID="cri-o://c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e" gracePeriod=2 Jan 26 10:29:07 crc kubenswrapper[4827]: I0126 10:29:07.877106 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.003127 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content\") pod \"0a2653a0-abdb-4208-958f-039a35ddaa9a\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.003288 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkfsm\" (UniqueName: \"kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm\") pod \"0a2653a0-abdb-4208-958f-039a35ddaa9a\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.003320 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities\") pod \"0a2653a0-abdb-4208-958f-039a35ddaa9a\" (UID: \"0a2653a0-abdb-4208-958f-039a35ddaa9a\") " Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.004427 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities" (OuterVolumeSpecName: "utilities") pod "0a2653a0-abdb-4208-958f-039a35ddaa9a" (UID: "0a2653a0-abdb-4208-958f-039a35ddaa9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.010026 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm" (OuterVolumeSpecName: "kube-api-access-jkfsm") pod "0a2653a0-abdb-4208-958f-039a35ddaa9a" (UID: "0a2653a0-abdb-4208-958f-039a35ddaa9a"). InnerVolumeSpecName "kube-api-access-jkfsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.067235 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a2653a0-abdb-4208-958f-039a35ddaa9a" (UID: "0a2653a0-abdb-4208-958f-039a35ddaa9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.105657 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkfsm\" (UniqueName: \"kubernetes.io/projected/0a2653a0-abdb-4208-958f-039a35ddaa9a-kube-api-access-jkfsm\") on node \"crc\" DevicePath \"\"" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.105701 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.105714 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2653a0-abdb-4208-958f-039a35ddaa9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.439999 4827 generic.go:334] "Generic (PLEG): container finished" podID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerID="c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e" exitCode=0 Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.440073 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerDied","Data":"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e"} Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.440106 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-gdrz4" event={"ID":"0a2653a0-abdb-4208-958f-039a35ddaa9a","Type":"ContainerDied","Data":"3f0ca63486157b48dcebd3980525ab25151987492981ea2606907fc8cb83ee10"} Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.440133 4827 scope.go:117] "RemoveContainer" containerID="c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.440360 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrz4" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.460736 4827 scope.go:117] "RemoveContainer" containerID="143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.488304 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.500857 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gdrz4"] Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.502045 4827 scope.go:117] "RemoveContainer" containerID="e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.545372 4827 scope.go:117] "RemoveContainer" containerID="c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e" Jan 26 10:29:08 crc kubenswrapper[4827]: E0126 10:29:08.546429 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e\": container with ID starting with c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e not found: ID does not exist" containerID="c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 
10:29:08.546469 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e"} err="failed to get container status \"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e\": rpc error: code = NotFound desc = could not find container \"c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e\": container with ID starting with c9e6e64b09665c78c290044eb41bed35e85a26d7141181a328b93ce71cb9733e not found: ID does not exist" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.546496 4827 scope.go:117] "RemoveContainer" containerID="143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d" Jan 26 10:29:08 crc kubenswrapper[4827]: E0126 10:29:08.552240 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d\": container with ID starting with 143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d not found: ID does not exist" containerID="143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.552424 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d"} err="failed to get container status \"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d\": rpc error: code = NotFound desc = could not find container \"143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d\": container with ID starting with 143afd9c7f2f280e21387980490c2f29ad09a888b7481c161f3ce5f92c98535d not found: ID does not exist" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.552513 4827 scope.go:117] "RemoveContainer" containerID="e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f" Jan 26 10:29:08 crc 
kubenswrapper[4827]: E0126 10:29:08.553000 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f\": container with ID starting with e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f not found: ID does not exist" containerID="e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f" Jan 26 10:29:08 crc kubenswrapper[4827]: I0126 10:29:08.553045 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f"} err="failed to get container status \"e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f\": rpc error: code = NotFound desc = could not find container \"e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f\": container with ID starting with e4ee9f262575af94fc862c6114bee0e2002163fbf528b029adb276fc1f5c2f6f not found: ID does not exist" Jan 26 10:29:09 crc kubenswrapper[4827]: I0126 10:29:09.715140 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" path="/var/lib/kubelet/pods/0a2653a0-abdb-4208-958f-039a35ddaa9a/volumes" Jan 26 10:29:12 crc kubenswrapper[4827]: I0126 10:29:12.269053 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:29:12 crc kubenswrapper[4827]: I0126 10:29:12.269344 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.862011 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kqgh6/must-gather-rz4d5"] Jan 26 10:29:32 crc kubenswrapper[4827]: E0126 10:29:32.862793 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="registry-server" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.862806 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="registry-server" Jan 26 10:29:32 crc kubenswrapper[4827]: E0126 10:29:32.862820 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="extract-utilities" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.862826 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="extract-utilities" Jan 26 10:29:32 crc kubenswrapper[4827]: E0126 10:29:32.862845 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="extract-content" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.862851 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="extract-content" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.863022 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2653a0-abdb-4208-958f-039a35ddaa9a" containerName="registry-server" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.868803 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.875347 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kqgh6"/"default-dockercfg-tr7z9" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.875353 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kqgh6"/"kube-root-ca.crt" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.875350 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kqgh6"/"openshift-service-ca.crt" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.907025 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kqgh6/must-gather-rz4d5"] Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.934536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:32 crc kubenswrapper[4827]: I0126 10:29:32.934606 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45g9k\" (UniqueName: \"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.036372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " 
pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.036441 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45g9k\" (UniqueName: \"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.037166 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.055761 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45g9k\" (UniqueName: \"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k\") pod \"must-gather-rz4d5\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.185526 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.678389 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kqgh6/must-gather-rz4d5"] Jan 26 10:29:33 crc kubenswrapper[4827]: I0126 10:29:33.729140 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" event={"ID":"249f5c89-068c-4736-a6a2-29f200f4b201","Type":"ContainerStarted","Data":"9b97f93ff4dbf86f79f0d26007d1380c3ce81d868343fcf455b83323483ca11a"} Jan 26 10:29:34 crc kubenswrapper[4827]: I0126 10:29:34.736780 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" event={"ID":"249f5c89-068c-4736-a6a2-29f200f4b201","Type":"ContainerStarted","Data":"575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f"} Jan 26 10:29:34 crc kubenswrapper[4827]: I0126 10:29:34.737097 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" event={"ID":"249f5c89-068c-4736-a6a2-29f200f4b201","Type":"ContainerStarted","Data":"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725"} Jan 26 10:29:34 crc kubenswrapper[4827]: I0126 10:29:34.762980 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" podStartSLOduration=2.76295841 podStartE2EDuration="2.76295841s" podCreationTimestamp="2026-01-26 10:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:29:34.759372044 +0000 UTC m=+5003.408043873" watchObservedRunningTime="2026-01-26 10:29:34.76295841 +0000 UTC m=+5003.411630229" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.662441 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-r5vdg"] Jan 26 10:29:38 crc kubenswrapper[4827]: 
I0126 10:29:38.663945 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.838662 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwgt\" (UniqueName: \"kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.838713 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.940230 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwgt\" (UniqueName: \"kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.940290 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.940474 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") 
" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.969488 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwgt\" (UniqueName: \"kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt\") pod \"crc-debug-r5vdg\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:38 crc kubenswrapper[4827]: I0126 10:29:38.979259 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:29:39 crc kubenswrapper[4827]: W0126 10:29:39.006141 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bc7deea_5a5d_47e2_bc37_1b1443308909.slice/crio-28716378971b5a40d00c811c47b7be408e654e6e9b13bd8e610d9d2185eddc19 WatchSource:0}: Error finding container 28716378971b5a40d00c811c47b7be408e654e6e9b13bd8e610d9d2185eddc19: Status 404 returned error can't find the container with id 28716378971b5a40d00c811c47b7be408e654e6e9b13bd8e610d9d2185eddc19 Jan 26 10:29:39 crc kubenswrapper[4827]: I0126 10:29:39.791609 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" event={"ID":"8bc7deea-5a5d-47e2-bc37-1b1443308909","Type":"ContainerStarted","Data":"604c820c068085b595523ca09f69c688e245a7650e3a929a82c4601498c9d39d"} Jan 26 10:29:39 crc kubenswrapper[4827]: I0126 10:29:39.793067 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" event={"ID":"8bc7deea-5a5d-47e2-bc37-1b1443308909","Type":"ContainerStarted","Data":"28716378971b5a40d00c811c47b7be408e654e6e9b13bd8e610d9d2185eddc19"} Jan 26 10:29:39 crc kubenswrapper[4827]: I0126 10:29:39.813737 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" 
podStartSLOduration=1.813722598 podStartE2EDuration="1.813722598s" podCreationTimestamp="2026-01-26 10:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:29:39.813442911 +0000 UTC m=+5008.462114730" watchObservedRunningTime="2026-01-26 10:29:39.813722598 +0000 UTC m=+5008.462394417" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.269138 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.269413 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.269456 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.270291 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.270344 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" 
podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" gracePeriod=600 Jan 26 10:29:42 crc kubenswrapper[4827]: E0126 10:29:42.396666 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.819480 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" exitCode=0 Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.819519 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68"} Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.819562 4827 scope.go:117] "RemoveContainer" containerID="9417286c39a40ba04744ca0faa225e0945bedb428ec5d6c260b418171d315ddb" Jan 26 10:29:42 crc kubenswrapper[4827]: I0126 10:29:42.820154 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:29:42 crc kubenswrapper[4827]: E0126 10:29:42.820474 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:29:55 crc kubenswrapper[4827]: I0126 10:29:55.702903 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:29:55 crc kubenswrapper[4827]: E0126 10:29:55.703665 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.161353 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8"] Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.163694 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.170417 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.171928 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8"] Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.173748 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.257317 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.257621 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5dh\" (UniqueName: \"kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.257704 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.373909 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5dh\" (UniqueName: \"kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.374088 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.374245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.375525 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.394048 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.406057 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs5dh\" (UniqueName: \"kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh\") pod \"collect-profiles-29490390-lbhc8\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:00 crc kubenswrapper[4827]: I0126 10:30:00.487507 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:01 crc kubenswrapper[4827]: I0126 10:30:01.020325 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8"] Jan 26 10:30:01 crc kubenswrapper[4827]: I0126 10:30:01.964396 4827 generic.go:334] "Generic (PLEG): container finished" podID="d66a4de4-9d17-4c54-a657-815bd841ba07" containerID="d40e2500871a31653e75bae03a7f29cee7ac935d16c1b0f9715e99b80aa73a11" exitCode=0 Jan 26 10:30:01 crc kubenswrapper[4827]: I0126 10:30:01.964793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" event={"ID":"d66a4de4-9d17-4c54-a657-815bd841ba07","Type":"ContainerDied","Data":"d40e2500871a31653e75bae03a7f29cee7ac935d16c1b0f9715e99b80aa73a11"} Jan 26 10:30:01 crc kubenswrapper[4827]: I0126 10:30:01.964816 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" 
event={"ID":"d66a4de4-9d17-4c54-a657-815bd841ba07","Type":"ContainerStarted","Data":"bd6cc4e4c211c8c2640eb2196fd7a3d7f7e7636b4bb41df365428da7c878e5ff"} Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.285757 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.433966 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume\") pod \"d66a4de4-9d17-4c54-a657-815bd841ba07\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.434216 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume\") pod \"d66a4de4-9d17-4c54-a657-815bd841ba07\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.434331 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs5dh\" (UniqueName: \"kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh\") pod \"d66a4de4-9d17-4c54-a657-815bd841ba07\" (UID: \"d66a4de4-9d17-4c54-a657-815bd841ba07\") " Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.434736 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume" (OuterVolumeSpecName: "config-volume") pod "d66a4de4-9d17-4c54-a657-815bd841ba07" (UID: "d66a4de4-9d17-4c54-a657-815bd841ba07"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.435200 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a4de4-9d17-4c54-a657-815bd841ba07-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.439805 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh" (OuterVolumeSpecName: "kube-api-access-cs5dh") pod "d66a4de4-9d17-4c54-a657-815bd841ba07" (UID: "d66a4de4-9d17-4c54-a657-815bd841ba07"). InnerVolumeSpecName "kube-api-access-cs5dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.453795 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d66a4de4-9d17-4c54-a657-815bd841ba07" (UID: "d66a4de4-9d17-4c54-a657-815bd841ba07"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.537311 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs5dh\" (UniqueName: \"kubernetes.io/projected/d66a4de4-9d17-4c54-a657-815bd841ba07-kube-api-access-cs5dh\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.537353 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d66a4de4-9d17-4c54-a657-815bd841ba07-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.980908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" event={"ID":"d66a4de4-9d17-4c54-a657-815bd841ba07","Type":"ContainerDied","Data":"bd6cc4e4c211c8c2640eb2196fd7a3d7f7e7636b4bb41df365428da7c878e5ff"} Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.980943 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6cc4e4c211c8c2640eb2196fd7a3d7f7e7636b4bb41df365428da7c878e5ff" Jan 26 10:30:03 crc kubenswrapper[4827]: I0126 10:30:03.980978 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490390-lbhc8" Jan 26 10:30:04 crc kubenswrapper[4827]: I0126 10:30:04.362967 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"] Jan 26 10:30:04 crc kubenswrapper[4827]: I0126 10:30:04.374397 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-8z2rg"] Jan 26 10:30:05 crc kubenswrapper[4827]: I0126 10:30:05.713679 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa65d1e5-d891-43e0-a7a4-77decb5e06ce" path="/var/lib/kubelet/pods/aa65d1e5-d891-43e0-a7a4-77decb5e06ce/volumes" Jan 26 10:30:09 crc kubenswrapper[4827]: I0126 10:30:09.707017 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:30:09 crc kubenswrapper[4827]: E0126 10:30:09.707759 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:30:13 crc kubenswrapper[4827]: I0126 10:30:13.062984 4827 generic.go:334] "Generic (PLEG): container finished" podID="8bc7deea-5a5d-47e2-bc37-1b1443308909" containerID="604c820c068085b595523ca09f69c688e245a7650e3a929a82c4601498c9d39d" exitCode=0 Jan 26 10:30:13 crc kubenswrapper[4827]: I0126 10:30:13.063081 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" event={"ID":"8bc7deea-5a5d-47e2-bc37-1b1443308909","Type":"ContainerDied","Data":"604c820c068085b595523ca09f69c688e245a7650e3a929a82c4601498c9d39d"} Jan 26 10:30:14 
crc kubenswrapper[4827]: I0126 10:30:14.191049 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.237620 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-r5vdg"] Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.252716 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-r5vdg"] Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.391004 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host\") pod \"8bc7deea-5a5d-47e2-bc37-1b1443308909\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.391101 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host" (OuterVolumeSpecName: "host") pod "8bc7deea-5a5d-47e2-bc37-1b1443308909" (UID: "8bc7deea-5a5d-47e2-bc37-1b1443308909"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.391160 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwwgt\" (UniqueName: \"kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt\") pod \"8bc7deea-5a5d-47e2-bc37-1b1443308909\" (UID: \"8bc7deea-5a5d-47e2-bc37-1b1443308909\") " Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.391874 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bc7deea-5a5d-47e2-bc37-1b1443308909-host\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.400967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt" (OuterVolumeSpecName: "kube-api-access-vwwgt") pod "8bc7deea-5a5d-47e2-bc37-1b1443308909" (UID: "8bc7deea-5a5d-47e2-bc37-1b1443308909"). InnerVolumeSpecName "kube-api-access-vwwgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:30:14 crc kubenswrapper[4827]: I0126 10:30:14.493525 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwwgt\" (UniqueName: \"kubernetes.io/projected/8bc7deea-5a5d-47e2-bc37-1b1443308909-kube-api-access-vwwgt\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.079950 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28716378971b5a40d00c811c47b7be408e654e6e9b13bd8e610d9d2185eddc19" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.080013 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-r5vdg" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.397975 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-6lm9t"] Jan 26 10:30:15 crc kubenswrapper[4827]: E0126 10:30:15.398313 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc7deea-5a5d-47e2-bc37-1b1443308909" containerName="container-00" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.398325 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc7deea-5a5d-47e2-bc37-1b1443308909" containerName="container-00" Jan 26 10:30:15 crc kubenswrapper[4827]: E0126 10:30:15.398346 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66a4de4-9d17-4c54-a657-815bd841ba07" containerName="collect-profiles" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.398351 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66a4de4-9d17-4c54-a657-815bd841ba07" containerName="collect-profiles" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.398506 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66a4de4-9d17-4c54-a657-815bd841ba07" containerName="collect-profiles" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.398529 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc7deea-5a5d-47e2-bc37-1b1443308909" containerName="container-00" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.399387 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.416271 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6swv9\" (UniqueName: \"kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.416325 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.518065 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6swv9\" (UniqueName: \"kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.518134 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.518274 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc 
kubenswrapper[4827]: I0126 10:30:15.536341 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6swv9\" (UniqueName: \"kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9\") pod \"crc-debug-6lm9t\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.717847 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc7deea-5a5d-47e2-bc37-1b1443308909" path="/var/lib/kubelet/pods/8bc7deea-5a5d-47e2-bc37-1b1443308909/volumes" Jan 26 10:30:15 crc kubenswrapper[4827]: I0126 10:30:15.720996 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:15 crc kubenswrapper[4827]: W0126 10:30:15.744155 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb14b4813_fe24_46f6_af0b_bb4bef218f00.slice/crio-240510163fc91bd2a239e44bbaac28ab1c70e484c37e8d43cae0f91a66204292 WatchSource:0}: Error finding container 240510163fc91bd2a239e44bbaac28ab1c70e484c37e8d43cae0f91a66204292: Status 404 returned error can't find the container with id 240510163fc91bd2a239e44bbaac28ab1c70e484c37e8d43cae0f91a66204292 Jan 26 10:30:16 crc kubenswrapper[4827]: I0126 10:30:16.087473 4827 generic.go:334] "Generic (PLEG): container finished" podID="b14b4813-fe24-46f6-af0b-bb4bef218f00" containerID="e84e3981f25e4f8a44755cd0cfc5366f81d24e11ac7e0851619739694937b017" exitCode=0 Jan 26 10:30:16 crc kubenswrapper[4827]: I0126 10:30:16.087575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" event={"ID":"b14b4813-fe24-46f6-af0b-bb4bef218f00","Type":"ContainerDied","Data":"e84e3981f25e4f8a44755cd0cfc5366f81d24e11ac7e0851619739694937b017"} Jan 26 10:30:16 crc kubenswrapper[4827]: I0126 10:30:16.087769 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" event={"ID":"b14b4813-fe24-46f6-af0b-bb4bef218f00","Type":"ContainerStarted","Data":"240510163fc91bd2a239e44bbaac28ab1c70e484c37e8d43cae0f91a66204292"} Jan 26 10:30:16 crc kubenswrapper[4827]: I0126 10:30:16.503707 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-6lm9t"] Jan 26 10:30:16 crc kubenswrapper[4827]: I0126 10:30:16.511420 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-6lm9t"] Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.214106 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.248442 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host\") pod \"b14b4813-fe24-46f6-af0b-bb4bef218f00\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.248553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host" (OuterVolumeSpecName: "host") pod "b14b4813-fe24-46f6-af0b-bb4bef218f00" (UID: "b14b4813-fe24-46f6-af0b-bb4bef218f00"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.248671 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6swv9\" (UniqueName: \"kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9\") pod \"b14b4813-fe24-46f6-af0b-bb4bef218f00\" (UID: \"b14b4813-fe24-46f6-af0b-bb4bef218f00\") " Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.249306 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b14b4813-fe24-46f6-af0b-bb4bef218f00-host\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.254235 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9" (OuterVolumeSpecName: "kube-api-access-6swv9") pod "b14b4813-fe24-46f6-af0b-bb4bef218f00" (UID: "b14b4813-fe24-46f6-af0b-bb4bef218f00"). InnerVolumeSpecName "kube-api-access-6swv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.350854 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6swv9\" (UniqueName: \"kubernetes.io/projected/b14b4813-fe24-46f6-af0b-bb4bef218f00-kube-api-access-6swv9\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.715335 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14b4813-fe24-46f6-af0b-bb4bef218f00" path="/var/lib/kubelet/pods/b14b4813-fe24-46f6-af0b-bb4bef218f00/volumes" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.724699 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-t6dv6"] Jan 26 10:30:17 crc kubenswrapper[4827]: E0126 10:30:17.725221 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14b4813-fe24-46f6-af0b-bb4bef218f00" containerName="container-00" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.725245 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14b4813-fe24-46f6-af0b-bb4bef218f00" containerName="container-00" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.731954 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14b4813-fe24-46f6-af0b-bb4bef218f00" containerName="container-00" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.732803 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.766027 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jn7d\" (UniqueName: \"kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.767271 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.869881 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jn7d\" (UniqueName: \"kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.869926 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc kubenswrapper[4827]: I0126 10:30:17.870063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:17 crc 
kubenswrapper[4827]: I0126 10:30:17.886151 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jn7d\" (UniqueName: \"kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d\") pod \"crc-debug-t6dv6\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:18 crc kubenswrapper[4827]: I0126 10:30:18.054442 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:18 crc kubenswrapper[4827]: W0126 10:30:18.090095 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dc5c0d7_de70_47be_be90_4b2000ce0e51.slice/crio-c7a41f579f6be081476731a2fc167da5b15df5e0d0f60c5811a094f78352764e WatchSource:0}: Error finding container c7a41f579f6be081476731a2fc167da5b15df5e0d0f60c5811a094f78352764e: Status 404 returned error can't find the container with id c7a41f579f6be081476731a2fc167da5b15df5e0d0f60c5811a094f78352764e Jan 26 10:30:18 crc kubenswrapper[4827]: I0126 10:30:18.106298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" event={"ID":"0dc5c0d7-de70-47be-be90-4b2000ce0e51","Type":"ContainerStarted","Data":"c7a41f579f6be081476731a2fc167da5b15df5e0d0f60c5811a094f78352764e"} Jan 26 10:30:18 crc kubenswrapper[4827]: I0126 10:30:18.108224 4827 scope.go:117] "RemoveContainer" containerID="e84e3981f25e4f8a44755cd0cfc5366f81d24e11ac7e0851619739694937b017" Jan 26 10:30:18 crc kubenswrapper[4827]: I0126 10:30:18.108399 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-6lm9t" Jan 26 10:30:19 crc kubenswrapper[4827]: I0126 10:30:19.116742 4827 generic.go:334] "Generic (PLEG): container finished" podID="0dc5c0d7-de70-47be-be90-4b2000ce0e51" containerID="3dbd9a86608db5ff53cb779b8a67f4cd01da06a1cc5b7b0062bc3703148f6914" exitCode=0 Jan 26 10:30:19 crc kubenswrapper[4827]: I0126 10:30:19.116789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" event={"ID":"0dc5c0d7-de70-47be-be90-4b2000ce0e51","Type":"ContainerDied","Data":"3dbd9a86608db5ff53cb779b8a67f4cd01da06a1cc5b7b0062bc3703148f6914"} Jan 26 10:30:19 crc kubenswrapper[4827]: I0126 10:30:19.170424 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-t6dv6"] Jan 26 10:30:19 crc kubenswrapper[4827]: I0126 10:30:19.186174 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kqgh6/crc-debug-t6dv6"] Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.220756 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.300734 4827 scope.go:117] "RemoveContainer" containerID="fc130b88ccffaf3debb786f3e21cf52ef0f64753d036c015da4bc170bec7d8ad" Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.312790 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jn7d\" (UniqueName: \"kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d\") pod \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.312846 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host\") pod \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\" (UID: \"0dc5c0d7-de70-47be-be90-4b2000ce0e51\") " Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.313447 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host" (OuterVolumeSpecName: "host") pod "0dc5c0d7-de70-47be-be90-4b2000ce0e51" (UID: "0dc5c0d7-de70-47be-be90-4b2000ce0e51"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.345224 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d" (OuterVolumeSpecName: "kube-api-access-2jn7d") pod "0dc5c0d7-de70-47be-be90-4b2000ce0e51" (UID: "0dc5c0d7-de70-47be-be90-4b2000ce0e51"). InnerVolumeSpecName "kube-api-access-2jn7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.415051 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jn7d\" (UniqueName: \"kubernetes.io/projected/0dc5c0d7-de70-47be-be90-4b2000ce0e51-kube-api-access-2jn7d\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:20 crc kubenswrapper[4827]: I0126 10:30:20.415082 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dc5c0d7-de70-47be-be90-4b2000ce0e51-host\") on node \"crc\" DevicePath \"\"" Jan 26 10:30:21 crc kubenswrapper[4827]: I0126 10:30:21.138349 4827 scope.go:117] "RemoveContainer" containerID="3dbd9a86608db5ff53cb779b8a67f4cd01da06a1cc5b7b0062bc3703148f6914" Jan 26 10:30:21 crc kubenswrapper[4827]: I0126 10:30:21.138715 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/crc-debug-t6dv6" Jan 26 10:30:21 crc kubenswrapper[4827]: I0126 10:30:21.712942 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:30:21 crc kubenswrapper[4827]: I0126 10:30:21.713601 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc5c0d7-de70-47be-be90-4b2000ce0e51" path="/var/lib/kubelet/pods/0dc5c0d7-de70-47be-be90-4b2000ce0e51/volumes" Jan 26 10:30:21 crc kubenswrapper[4827]: E0126 10:30:21.713652 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:30:36 crc kubenswrapper[4827]: I0126 10:30:36.703033 4827 scope.go:117] "RemoveContainer" 
containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:30:36 crc kubenswrapper[4827]: E0126 10:30:36.704335 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:30:48 crc kubenswrapper[4827]: I0126 10:30:48.704009 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:30:48 crc kubenswrapper[4827]: E0126 10:30:48.706411 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:30:49 crc kubenswrapper[4827]: I0126 10:30:49.807756 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3f89d129-88aa-4c87-ac49-33e52bd1cd4c" containerName="galera" probeResult="failure" output="command timed out" Jan 26 10:30:59 crc kubenswrapper[4827]: I0126 10:30:59.703756 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:30:59 crc kubenswrapper[4827]: E0126 10:30:59.704576 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:31:10 crc kubenswrapper[4827]: I0126 10:31:10.703445 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:31:10 crc kubenswrapper[4827]: E0126 10:31:10.704142 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:31:21 crc kubenswrapper[4827]: I0126 10:31:21.719848 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:31:21 crc kubenswrapper[4827]: E0126 10:31:21.720895 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:31:33 crc kubenswrapper[4827]: I0126 10:31:33.703563 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:31:33 crc kubenswrapper[4827]: E0126 10:31:33.704581 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:31:37 crc kubenswrapper[4827]: I0126 10:31:37.828700 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-549f46df88-ldq7r_149ae16b-d620-417f-a9df-0ff3864c7d08/barbican-api/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.014571 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-549f46df88-ldq7r_149ae16b-d620-417f-a9df-0ff3864c7d08/barbican-api-log/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.084923 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7665698578-xljwl_13b70f90-293b-4b38-be0f-0e5bde0c5e85/barbican-keystone-listener/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.094303 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7665698578-xljwl_13b70f90-293b-4b38-be0f-0e5bde0c5e85/barbican-keystone-listener-log/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.310073 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bb7c7c765-wmktj_ed6134dd-363a-49bb-99bb-6bac419c845a/barbican-worker/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.314782 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bb7c7c765-wmktj_ed6134dd-363a-49bb-99bb-6bac419c845a/barbican-worker-log/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.489778 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7xc2k_e2b5fccf-d108-4563-9e78-16e31b6959bf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: 
I0126 10:31:38.524673 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/ceilometer-central-agent/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.611412 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/ceilometer-notification-agent/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.766155 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/proxy-httpd/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.771040 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_788b1f32-de2c-4281-902f-63df02b00cd8/sg-core/0.log" Jan 26 10:31:38 crc kubenswrapper[4827]: I0126 10:31:38.871180 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-4vdff_9303c2b8-3943-40d2-b648-a7d24cf50214/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.011295 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp59k_c3096f06-9fdd-406d-9200-1fa4a2db5006/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.234898 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_07c0b8ae-368e-4e51-8686-6d5ce6def2a9/cinder-api-log/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.240472 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_07c0b8ae-368e-4e51-8686-6d5ce6def2a9/cinder-api/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.516399 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_98911844-c24c-42e7-bf54-ca3cfb5d77c5/cinder-backup/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.517890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_98911844-c24c-42e7-bf54-ca3cfb5d77c5/probe/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.607348 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_aabf5c90-5a50-4950-a417-ddf73a2fe2ce/cinder-scheduler/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.720259 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_aabf5c90-5a50-4950-a417-ddf73a2fe2ce/probe/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.854010 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4313f2bb-7f66-41c6-9c0e-87ae0d9eea08/probe/0.log" Jan 26 10:31:39 crc kubenswrapper[4827]: I0126 10:31:39.905933 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4313f2bb-7f66-41c6-9c0e-87ae0d9eea08/cinder-volume/0.log" Jan 26 10:31:40 crc kubenswrapper[4827]: I0126 10:31:40.009749 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-f6x26_dffac4b8-657b-40f3-86cc-6138f70d889b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:40 crc kubenswrapper[4827]: I0126 10:31:40.199718 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-x26cv_c44eb77f-7f7f-461f-b0aa-cbd347852699/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:40 crc kubenswrapper[4827]: I0126 10:31:40.348005 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/init/0.log" Jan 26 10:31:40 crc kubenswrapper[4827]: 
I0126 10:31:40.991437 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/init/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.117375 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d29d9533-48d6-4314-8bab-835c6804dcd6/glance-httpd/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.283127 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d29d9533-48d6-4314-8bab-835c6804dcd6/glance-log/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.302036 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-j4gzs_1d19f3d1-0a0f-47bf-8c31-c9b9da4c9006/dnsmasq-dns/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.491427 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_792cbe2a-cbf2-48f0-8eac-3c3d5b91538a/glance-httpd/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.568093 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_792cbe2a-cbf2-48f0-8eac-3c3d5b91538a/glance-log/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.859633 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8d4867b4-j5kkp_4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d/horizon/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.925906 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8d4867b4-j5kkp_4019ef6d-d9bb-4e2c-ad8a-d51a0ebbdb2d/horizon-log/0.log" Jan 26 10:31:41 crc kubenswrapper[4827]: I0126 10:31:41.974490 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qhqv7_c4931278-b623-46c2-8444-9a7b75093703/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:42 crc kubenswrapper[4827]: I0126 10:31:42.519658 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gffwb_e6ae452f-ac1c-44aa-b9cc-7f1a02d381c5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:42 crc kubenswrapper[4827]: I0126 10:31:42.747161 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bb549d74c-6hlgt_053973de-195d-44a8-ba9f-d665b8a53c87/keystone-api/0.log" Jan 26 10:31:42 crc kubenswrapper[4827]: I0126 10:31:42.834283 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490361-2sb62_5f315ab7-c066-4333-93bb-1e479301743a/keystone-cron/0.log" Jan 26 10:31:42 crc kubenswrapper[4827]: I0126 10:31:42.883166 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9d36e60e-5a78-4ce6-8997-688333022bc0/kube-state-metrics/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.039610 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gfc92_9f2e9aa2-d136-40ad-a382-41abb6ce645a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.136283 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5176c3b1-983f-4339-aa88-18ed0df10566/manila-api-log/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.297378 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5176c3b1-983f-4339-aa88-18ed0df10566/manila-api/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.372011 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_d4d9dc34-401b-43d6-97a0-c628eb57f517/probe/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.480798 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d4d9dc34-401b-43d6-97a0-c628eb57f517/manila-scheduler/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.594035 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0/manila-share/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.640152 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6d8fa2f4-9fd4-4bc2-a1cf-16ce1e8781f0/probe/0.log" Jan 26 10:31:43 crc kubenswrapper[4827]: I0126 10:31:43.992894 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bdc4699d9-tnd4c_09595eb4-a10d-44f0-9aee-927389e0accb/neutron-api/0.log" Jan 26 10:31:44 crc kubenswrapper[4827]: I0126 10:31:44.029080 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bdc4699d9-tnd4c_09595eb4-a10d-44f0-9aee-927389e0accb/neutron-httpd/0.log" Jan 26 10:31:44 crc kubenswrapper[4827]: I0126 10:31:44.315839 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2b9q_fd1780e3-c584-4f74-b260-cec896594153/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:44 crc kubenswrapper[4827]: I0126 10:31:44.845604 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f509fce4-52e1-4f74-8cfa-cfe156852aed/nova-api-log/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.060707 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ef7d553f-5037-4ed5-9d99-c278f206381e/nova-cell0-conductor-conductor/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.430141 4827 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f509fce4-52e1-4f74-8cfa-cfe156852aed/nova-api-api/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.528425 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a7163549-12a8-403d-b952-a03566f40771/nova-cell1-conductor-conductor/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.534854 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fd4d28c4-421a-40a9-8629-e832e0aa002f/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.896150 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-w2q9v_62a102b4-e915-4a42-a644-91624460cb06/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:45 crc kubenswrapper[4827]: I0126 10:31:45.923224 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20987ce4-16e9-4364-9742-44454d336e33/nova-metadata-log/0.log" Jan 26 10:31:46 crc kubenswrapper[4827]: I0126 10:31:46.379527 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/mysql-bootstrap/0.log" Jan 26 10:31:46 crc kubenswrapper[4827]: I0126 10:31:46.477173 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_21741788-081f-4f17-973d-ae145a0469ff/nova-scheduler-scheduler/0.log" Jan 26 10:31:46 crc kubenswrapper[4827]: I0126 10:31:46.648109 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/mysql-bootstrap/0.log" Jan 26 10:31:46 crc kubenswrapper[4827]: I0126 10:31:46.715204 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b1cad67f-3855-4463-980d-5372c7185eef/galera/0.log" Jan 26 
10:31:46 crc kubenswrapper[4827]: I0126 10:31:46.915545 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/mysql-bootstrap/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.084031 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/mysql-bootstrap/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.100791 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3f89d129-88aa-4c87-ac49-33e52bd1cd4c/galera/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.304530 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a74b1cb5-e36a-49d0-b075-f3f269487645/openstackclient/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.498155 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9dxlf_0ff255bf-dcef-4418-ac66-802299400786/openstack-network-exporter/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.675416 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server-init/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.883506 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20987ce4-16e9-4364-9742-44454d336e33/nova-metadata-metadata/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.950272 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server/0.log" Jan 26 10:31:47 crc kubenswrapper[4827]: I0126 10:31:47.956088 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovs-vswitchd/0.log" Jan 26 10:31:47 crc 
kubenswrapper[4827]: I0126 10:31:47.960522 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6jn8j_824497ea-421f-4928-83bd-908240595a4f/ovsdb-server-init/0.log" Jan 26 10:31:48 crc kubenswrapper[4827]: I0126 10:31:48.268823 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sjsvm_60184b1a-f656-4b71-bf13-2953f715bc12/ovn-controller/0.log" Jan 26 10:31:48 crc kubenswrapper[4827]: I0126 10:31:48.324210 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xstrh_46064584-0d9c-4054-87ce-e417f22cd6ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:48 crc kubenswrapper[4827]: I0126 10:31:48.703179 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:31:48 crc kubenswrapper[4827]: E0126 10:31:48.703848 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:31:49 crc kubenswrapper[4827]: I0126 10:31:49.491050 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b4dd74b4-df0b-414c-ba61-5d428eb2f33e/ovn-northd/0.log" Jan 26 10:31:49 crc kubenswrapper[4827]: I0126 10:31:49.505397 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b4dd74b4-df0b-414c-ba61-5d428eb2f33e/openstack-network-exporter/0.log" Jan 26 10:31:49 crc kubenswrapper[4827]: I0126 10:31:49.833932 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e/openstack-network-exporter/0.log" Jan 26 10:31:49 crc kubenswrapper[4827]: I0126 10:31:49.974935 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f3e64751-5ea0-49b6-b93f-5c9ac2b5c58e/ovsdbserver-nb/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.042562 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f55a507a-514c-48de-a8e8-8a3ef3eef284/openstack-network-exporter/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.072566 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f55a507a-514c-48de-a8e8-8a3ef3eef284/ovsdbserver-sb/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.814530 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/setup-container/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.819280 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84fd67f47d-vt6sw_225ee5ae-fc10-4dd9-af29-0d227dd81802/placement-log/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.913604 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84fd67f47d-vt6sw_225ee5ae-fc10-4dd9-af29-0d227dd81802/placement-api/0.log" Jan 26 10:31:50 crc kubenswrapper[4827]: I0126 10:31:50.994963 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/setup-container/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.113266 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1cc30a0-73e5-4ffe-97c4-37779ea46d78/rabbitmq/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.258242 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/setup-container/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.448564 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/setup-container/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.475603 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2d4c7e4-4f6a-402c-af73-84404c567c53/rabbitmq/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.543347 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r5ltm_bb198638-c527-412e-96ae-d0cdc3c4abbd/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.724429 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-66hfl_e5c7854f-b129-4e2a-9af1-ce45d61e1ae2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:51 crc kubenswrapper[4827]: I0126 10:31:51.898670 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rkbpm_cf1f3c72-ea59-4949-aec9-51d06e078251/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:31:52 crc kubenswrapper[4827]: I0126 10:31:52.133066 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pjkdr_9ded20fe-a752-4b6f-94a3-b07079038103/ssh-known-hosts-edpm-deployment/0.log" Jan 26 10:31:52 crc kubenswrapper[4827]: I0126 10:31:52.294142 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a3afb0c8-7da0-4f91-a689-921ef566e7a2/tempest-tests-tempest-tests-runner/0.log" Jan 26 10:31:52 crc kubenswrapper[4827]: I0126 10:31:52.309993 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c074f00d-8c21-4bab-9019-138c164586fc/test-operator-logs-container/0.log" Jan 26 10:31:52 crc kubenswrapper[4827]: I0126 10:31:52.502031 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6q5nk_448918db-8118-4738-aeed-81ba5f247cbb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 10:32:02 crc kubenswrapper[4827]: I0126 10:32:02.703786 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:32:02 crc kubenswrapper[4827]: E0126 10:32:02.706130 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:32:07 crc kubenswrapper[4827]: I0126 10:32:07.058890 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_da6ed528-8ee6-421d-a921-a9b6d1382d45/memcached/0.log" Jan 26 10:32:15 crc kubenswrapper[4827]: I0126 10:32:15.703087 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:32:15 crc kubenswrapper[4827]: E0126 10:32:15.704087 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 
26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.209626 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.445583 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.446211 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.448021 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.597715 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/util/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.648158 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/extract/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.702903 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:32:26 crc kubenswrapper[4827]: E0126 10:32:26.703353 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.715964 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c37904d3a17d1a14bd1b3c4a14453aa908ae663076c617ee6579657f6nbxs_386c499d-ff53-4460-a37a-60cd7a42f922/pull/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.848013 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-82zp4_4b99eea5-fc5a-4441-8858-1a500c49c429/manager/0.log" Jan 26 10:32:26 crc kubenswrapper[4827]: I0126 10:32:26.968539 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7d95c_571aa666-d430-47aa-a48b-91b5a2555723/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.084067 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-g47s2_90405ca9-cf52-4ad1-94b9-54aacb8e5708/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.258052 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-w42nm_3759f1d2-941a-496f-a51e-aa2bd6fbeeec/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.279152 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-f4pjj_52992458-b4f0-409b-8be0-96a545a80839/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.511713 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-hj2q8_86d77aba-3a0a-43d5-b592-2c45d866515c/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.676140 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-skgxf_64d1c33b-eace-4919-be5d-463f9621036a/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.804615 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-96nv5_9f1d37d2-59af-4a07-8d64-f1636eee3929/manager/0.log" Jan 26 10:32:27 crc kubenswrapper[4827]: I0126 10:32:27.919889 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ldvbb_84b85200-c9f6-4759-bb84-1513165fe742/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.079017 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-tmb5m_7588c42e-08d0-4c2d-b62d-07fc7257cf8f/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.170359 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-6jxsp_7fa19e2b-55c2-4e72-882a-eb4437b37c50/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.426211 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-5tq7r_58431f1d-bbf1-459c-9f79-39c94712b9d7/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.457340 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-9g9tb_e71a3bb9-358c-45fd-a8f8-7a6cfbb309b4/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.532727 4827 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-l4gjk_c3b4b2f4-2b69-4c36-b967-27c70f7a5767/manager/0.log" Jan 26 10:32:28 crc kubenswrapper[4827]: I0126 10:32:28.702150 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-848957f4b4lzc5x_87aea9ac-4117-4870-81a9-44adabc28383/manager/0.log" Jan 26 10:32:29 crc kubenswrapper[4827]: I0126 10:32:29.082443 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f6799c556-8bwdr_62ec7e94-ac44-47b3-8a19-d0b443a135d4/operator/0.log" Jan 26 10:32:29 crc kubenswrapper[4827]: I0126 10:32:29.616458 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-vq7vj_565c65e3-ea09-4057-81de-381377042c19/manager/0.log" Jan 26 10:32:29 crc kubenswrapper[4827]: I0126 10:32:29.630297 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ttbws_e1ce3819-36a2-4cc6-9942-e8881815e42e/registry-server/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.066320 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-58bb7_7eea6dea-82a0-4c66-a5a0-0b7d11878264/operator/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.075577 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-rzc28_424c27d6-31d7-4a37-a7ef-c89099773070/manager/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.454502 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-fcj6p_12001a2b-7c86-41a4-ba17-a0d586aea6e5/manager/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.574666 4827 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-9qw4q_1cb20984-f7df-4d0b-9434-86182d952bb1/manager/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.600566 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65d46cfd44-jsnhm_8ba78edc-c408-4071-ac8f-432e12ebb708/manager/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.722750 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-cb96z_2e2bf61f-063e-4fa4-aa92-6c14ee83fc66/manager/0.log" Jan 26 10:32:30 crc kubenswrapper[4827]: I0126 10:32:30.825432 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-mbt6s_eb04b18e-1dd4-4824-a2d2-dd49ce4dd24b/manager/0.log" Jan 26 10:32:38 crc kubenswrapper[4827]: I0126 10:32:38.703263 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:32:38 crc kubenswrapper[4827]: E0126 10:32:38.704088 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:32:53 crc kubenswrapper[4827]: I0126 10:32:53.703272 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:32:53 crc kubenswrapper[4827]: E0126 10:32:53.704928 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:32:56 crc kubenswrapper[4827]: I0126 10:32:56.881729 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n4rf7_00acaa94-9dfe-4d0f-9ea2-17870a8c1af5/control-plane-machine-set-operator/0.log" Jan 26 10:32:57 crc kubenswrapper[4827]: I0126 10:32:57.068414 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rtv5j_00f5a10b-1353-4060-a2b0-7cc7d9980817/machine-api-operator/0.log" Jan 26 10:32:57 crc kubenswrapper[4827]: I0126 10:32:57.096177 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rtv5j_00f5a10b-1353-4060-a2b0-7cc7d9980817/kube-rbac-proxy/0.log" Jan 26 10:33:05 crc kubenswrapper[4827]: I0126 10:33:05.703455 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:33:05 crc kubenswrapper[4827]: E0126 10:33:05.704201 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:33:12 crc kubenswrapper[4827]: I0126 10:33:12.865889 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-5ctth_4614716e-593c-44d6-b054-f33ad6966d0b/cert-manager-controller/0.log" Jan 26 10:33:13 crc 
kubenswrapper[4827]: I0126 10:33:13.195416 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lgxgj_287fb4fc-a4a8-4758-8c18-ea75f9590b1a/cert-manager-cainjector/0.log" Jan 26 10:33:13 crc kubenswrapper[4827]: I0126 10:33:13.218543 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pw552_3da6c3f3-a01b-4f14-9028-a7e371a518d4/cert-manager-webhook/0.log" Jan 26 10:33:16 crc kubenswrapper[4827]: I0126 10:33:16.703680 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:33:16 crc kubenswrapper[4827]: E0126 10:33:16.704472 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:33:27 crc kubenswrapper[4827]: I0126 10:33:27.703908 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:33:27 crc kubenswrapper[4827]: E0126 10:33:27.704624 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.200860 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cx5k7_7988cfe9-a182-49bf-b821-06d94fb81ec5/nmstate-console-plugin/0.log" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.387204 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbplf_63aaa24b-8f3f-426f-910b-65c0a0fa9429/nmstate-handler/0.log" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.432297 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wkvlg_fc6481c7-2911-4068-9e79-b44f492beda6/kube-rbac-proxy/0.log" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.637463 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wkvlg_fc6481c7-2911-4068-9e79-b44f492beda6/nmstate-metrics/0.log" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.744136 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-d5ltb_de78c189-6378-4709-8f64-c4ec5c433064/nmstate-operator/0.log" Jan 26 10:33:29 crc kubenswrapper[4827]: I0126 10:33:29.869907 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-vp6lz_c0175eb7-29d3-4293-ab63-f5db59a1092b/nmstate-webhook/0.log" Jan 26 10:33:40 crc kubenswrapper[4827]: I0126 10:33:40.703481 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:33:40 crc kubenswrapper[4827]: E0126 10:33:40.704184 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" 
Jan 26 10:33:54 crc kubenswrapper[4827]: I0126 10:33:54.703315 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:33:54 crc kubenswrapper[4827]: E0126 10:33:54.704075 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:34:04 crc kubenswrapper[4827]: I0126 10:34:04.543588 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zxxx8_0014db8e-0b1a-460c-b64e-bae6cdf0aaf0/kube-rbac-proxy/0.log" Jan 26 10:34:04 crc kubenswrapper[4827]: I0126 10:34:04.675734 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zxxx8_0014db8e-0b1a-460c-b64e-bae6cdf0aaf0/controller/0.log" Jan 26 10:34:04 crc kubenswrapper[4827]: I0126 10:34:04.881858 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8pczr_80d0ec40-8d37-43f1-93c8-8c970fba7072/frr-k8s-webhook-server/0.log" Jan 26 10:34:04 crc kubenswrapper[4827]: I0126 10:34:04.919041 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.455734 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.485974 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.529707 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.544846 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.702982 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:34:05 crc kubenswrapper[4827]: E0126 10:34:05.703328 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.753301 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.776268 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.811490 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.838045 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log" Jan 26 10:34:05 crc kubenswrapper[4827]: I0126 10:34:05.998524 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-frr-files/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.013914 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-reloader/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.084427 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/cp-metrics/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.118098 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/controller/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.272826 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/frr-metrics/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.297361 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/kube-rbac-proxy/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.393792 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/kube-rbac-proxy-frr/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.669672 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/reloader/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.677833 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d6b7f6684-4h68b_538f5ce6-87b2-41eb-ad3a-92d274c88dbb/manager/0.log" Jan 26 10:34:06 crc kubenswrapper[4827]: I0126 10:34:06.896496 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b856d8997-9lj9h_619a25aa-4152-45b1-b27e-b1dd154b5738/webhook-server/0.log" Jan 26 10:34:07 crc kubenswrapper[4827]: I0126 10:34:07.097996 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rcbb_1f700d11-ba3a-4c81-8c29-237825f56448/kube-rbac-proxy/0.log" Jan 26 10:34:07 crc kubenswrapper[4827]: I0126 10:34:07.706810 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rcbb_1f700d11-ba3a-4c81-8c29-237825f56448/speaker/0.log" Jan 26 10:34:07 crc kubenswrapper[4827]: I0126 10:34:07.803271 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z5mhg_9d3cf333-fbf3-4b54-9f9b-a01cf98b9792/frr/0.log" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.006008 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"] Jan 26 10:34:16 crc kubenswrapper[4827]: E0126 10:34:16.007816 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc5c0d7-de70-47be-be90-4b2000ce0e51" containerName="container-00" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.007897 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc5c0d7-de70-47be-be90-4b2000ce0e51" containerName="container-00" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.008122 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc5c0d7-de70-47be-be90-4b2000ce0e51" containerName="container-00" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.009368 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.045458 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"] Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.101892 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.102158 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.102419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jr4\" (UniqueName: \"kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.203661 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.203777 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-64jr4\" (UniqueName: \"kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.203823 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.207884 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.207986 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.227307 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jr4\" (UniqueName: \"kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4\") pod \"certified-operators-x4mhb\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") " pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.327340 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4mhb" Jan 26 10:34:16 crc kubenswrapper[4827]: I0126 10:34:16.931668 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"] Jan 26 10:34:17 crc kubenswrapper[4827]: I0126 10:34:17.323853 4827 generic.go:334] "Generic (PLEG): container finished" podID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerID="9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7" exitCode=0 Jan 26 10:34:17 crc kubenswrapper[4827]: I0126 10:34:17.324402 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerDied","Data":"9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7"} Jan 26 10:34:17 crc kubenswrapper[4827]: I0126 10:34:17.324427 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerStarted","Data":"2d6930b842744cf6907720ff7900cc7fff33c439026672b19f3dc9a7c43d119c"} Jan 26 10:34:17 crc kubenswrapper[4827]: I0126 10:34:17.326551 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 10:34:18 crc kubenswrapper[4827]: I0126 10:34:18.333751 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerStarted","Data":"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"} Jan 26 10:34:18 crc kubenswrapper[4827]: I0126 10:34:18.703833 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:34:18 crc kubenswrapper[4827]: E0126 10:34:18.704145 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" Jan 26 10:34:19 crc kubenswrapper[4827]: I0126 10:34:19.343118 4827 generic.go:334] "Generic (PLEG): container finished" podID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerID="34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20" exitCode=0 Jan 26 10:34:19 crc kubenswrapper[4827]: I0126 10:34:19.343176 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerDied","Data":"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"} Jan 26 10:34:20 crc kubenswrapper[4827]: I0126 10:34:20.362702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerStarted","Data":"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"} Jan 26 10:34:20 crc kubenswrapper[4827]: I0126 10:34:20.394665 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x4mhb" podStartSLOduration=2.975531257 podStartE2EDuration="5.394647529s" podCreationTimestamp="2026-01-26 10:34:15 +0000 UTC" firstStartedPulling="2026-01-26 10:34:17.326337076 +0000 UTC m=+5285.975008895" lastFinishedPulling="2026-01-26 10:34:19.745453348 +0000 UTC m=+5288.394125167" observedRunningTime="2026-01-26 10:34:20.392181381 +0000 UTC m=+5289.040853200" watchObservedRunningTime="2026-01-26 10:34:20.394647529 +0000 UTC m=+5289.043319348" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.128385 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.340528 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.342918 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.359353 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.642947 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/extract/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.647700 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/util/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.718342 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjsd8k_da2a9099-cc10-4968-887e-ca1d997b172c/pull/0.log" Jan 26 10:34:23 crc kubenswrapper[4827]: I0126 10:34:23.809623 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log" Jan 26 
10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.010238 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.052070 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.059884 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.194545 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/util/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.242775 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/pull/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.279243 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kjr68_d9cafcc1-be7a-4449-b34d-8307959c4608/extract/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.442510 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.602011 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.673271 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.708374 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.919954 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-content/0.log" Jan 26 10:34:24 crc kubenswrapper[4827]: I0126 10:34:24.921193 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/extract-utilities/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.191686 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-utilities/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.534000 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-utilities/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.556087 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-content/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.558558 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-content/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.658153 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jcwnv_e3295c9e-728c-4747-ab65-ee52cd048562/registry-server/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.849999 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-utilities/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.852069 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/registry-server/0.log" Jan 26 10:34:25 crc kubenswrapper[4827]: I0126 10:34:25.857195 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x4mhb_050aaab0-4ede-4559-aa9d-1eb7bd44052c/extract-content/0.log" Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.052766 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log" Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.214203 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log" Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.271181 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log" Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.279382 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.327440 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.327750 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.381676 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.466027 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.501573 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-utilities/0.log"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.577738 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/extract-content/0.log"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.627593 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"]
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.846401 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hnfv7_164c8367-04d2-44e4-b127-fe8b2a6b62e8/marketplace-operator/0.log"
Jan 26 10:34:26 crc kubenswrapper[4827]: I0126 10:34:26.954111 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.099365 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.137612 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.282091 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.383178 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-md9m5_9381f7b0-db74-4848-b768-ee1071501178/registry-server/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.548817 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-utilities/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.641039 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/extract-content/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.682026 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fngdm_14e1c005-d10a-430f-881a-a222cd695727/registry-server/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.788274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.924119 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.948614 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:34:27 crc kubenswrapper[4827]: I0126 10:34:27.949266 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:34:28 crc kubenswrapper[4827]: I0126 10:34:28.135654 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-content/0.log"
Jan 26 10:34:28 crc kubenswrapper[4827]: I0126 10:34:28.195713 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/extract-utilities/0.log"
Jan 26 10:34:28 crc kubenswrapper[4827]: I0126 10:34:28.430023 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x4mhb" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="registry-server" containerID="cri-o://fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102" gracePeriod=2
Jan 26 10:34:28 crc kubenswrapper[4827]: I0126 10:34:28.767179 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9f9b_ef155af2-e9c9-45d6-8ea9-19ca71f455d1/registry-server/0.log"
Jan 26 10:34:28 crc kubenswrapper[4827]: I0126 10:34:28.972948 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.155056 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities\") pod \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") "
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.155455 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64jr4\" (UniqueName: \"kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4\") pod \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") "
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.155507 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content\") pod \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\" (UID: \"050aaab0-4ede-4559-aa9d-1eb7bd44052c\") "
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.159319 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities" (OuterVolumeSpecName: "utilities") pod "050aaab0-4ede-4559-aa9d-1eb7bd44052c" (UID: "050aaab0-4ede-4559-aa9d-1eb7bd44052c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.169040 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4" (OuterVolumeSpecName: "kube-api-access-64jr4") pod "050aaab0-4ede-4559-aa9d-1eb7bd44052c" (UID: "050aaab0-4ede-4559-aa9d-1eb7bd44052c"). InnerVolumeSpecName "kube-api-access-64jr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.199026 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "050aaab0-4ede-4559-aa9d-1eb7bd44052c" (UID: "050aaab0-4ede-4559-aa9d-1eb7bd44052c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.258348 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.258396 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64jr4\" (UniqueName: \"kubernetes.io/projected/050aaab0-4ede-4559-aa9d-1eb7bd44052c-kube-api-access-64jr4\") on node \"crc\" DevicePath \"\""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.258409 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050aaab0-4ede-4559-aa9d-1eb7bd44052c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.438904 4827 generic.go:334] "Generic (PLEG): container finished" podID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerID="fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102" exitCode=0
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.438956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerDied","Data":"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"}
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.438994 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4mhb" event={"ID":"050aaab0-4ede-4559-aa9d-1eb7bd44052c","Type":"ContainerDied","Data":"2d6930b842744cf6907720ff7900cc7fff33c439026672b19f3dc9a7c43d119c"}
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.438990 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4mhb"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.439019 4827 scope.go:117] "RemoveContainer" containerID="fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.466740 4827 scope.go:117] "RemoveContainer" containerID="34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.488981 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"]
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.504835 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x4mhb"]
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.506839 4827 scope.go:117] "RemoveContainer" containerID="9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.541183 4827 scope.go:117] "RemoveContainer" containerID="fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"
Jan 26 10:34:29 crc kubenswrapper[4827]: E0126 10:34:29.541999 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102\": container with ID starting with fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102 not found: ID does not exist" containerID="fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.542028 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102"} err="failed to get container status \"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102\": rpc error: code = NotFound desc = could not find container \"fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102\": container with ID starting with fef9e0b0e01d0764603912c81e3eacfb084fca2d290140f865a62b12ec0e9102 not found: ID does not exist"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.542049 4827 scope.go:117] "RemoveContainer" containerID="34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"
Jan 26 10:34:29 crc kubenswrapper[4827]: E0126 10:34:29.542299 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20\": container with ID starting with 34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20 not found: ID does not exist" containerID="34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.542327 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20"} err="failed to get container status \"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20\": rpc error: code = NotFound desc = could not find container \"34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20\": container with ID starting with 34527fdfcf0727c56827a1f3ea314d6deb6ac898b68885f212959793c6bdde20 not found: ID does not exist"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.542340 4827 scope.go:117] "RemoveContainer" containerID="9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7"
Jan 26 10:34:29 crc kubenswrapper[4827]: E0126 10:34:29.542697 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7\": container with ID starting with 9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7 not found: ID does not exist" containerID="9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.542722 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7"} err="failed to get container status \"9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7\": rpc error: code = NotFound desc = could not find container \"9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7\": container with ID starting with 9980daf883f6406b170a41997adc6cc6c54d004712ee5fbbf0ee9ba3857f47b7 not found: ID does not exist"
Jan 26 10:34:29 crc kubenswrapper[4827]: I0126 10:34:29.713767 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" path="/var/lib/kubelet/pods/050aaab0-4ede-4559-aa9d-1eb7bd44052c/volumes"
Jan 26 10:34:31 crc kubenswrapper[4827]: I0126 10:34:31.718205 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68"
Jan 26 10:34:31 crc kubenswrapper[4827]: E0126 10:34:31.719767 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9x8x_openshift-machine-config-operator(ef39dc20-499c-4665-9555-481361ffe06d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d"
Jan 26 10:34:42 crc kubenswrapper[4827]: I0126 10:34:42.769492 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68"
Jan 26 10:34:43 crc kubenswrapper[4827]: I0126 10:34:43.808835 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"0d486373c12cc9aceb8fe5211fbe2735239f26e572f0a98f65e78d07d605e632"}
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.848522 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:35:59 crc kubenswrapper[4827]: E0126 10:35:59.849697 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="registry-server"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.849713 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="registry-server"
Jan 26 10:35:59 crc kubenswrapper[4827]: E0126 10:35:59.849726 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="extract-utilities"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.849735 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="extract-utilities"
Jan 26 10:35:59 crc kubenswrapper[4827]: E0126 10:35:59.849748 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="extract-content"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.849756 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="extract-content"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.849974 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="050aaab0-4ede-4559-aa9d-1eb7bd44052c" containerName="registry-server"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.851536 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.860816 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.964494 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.964541 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhz7\" (UniqueName: \"kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:35:59 crc kubenswrapper[4827]: I0126 10:35:59.964570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.065960 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.066005 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhz7\" (UniqueName: \"kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.066030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.066450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.066568 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.088470 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhz7\" (UniqueName: \"kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7\") pod \"redhat-marketplace-2t282\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") " pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.183114 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:00 crc kubenswrapper[4827]: I0126 10:36:00.786476 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:36:01 crc kubenswrapper[4827]: I0126 10:36:01.564091 4827 generic.go:334] "Generic (PLEG): container finished" podID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerID="d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b" exitCode=0
Jan 26 10:36:01 crc kubenswrapper[4827]: I0126 10:36:01.564233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerDied","Data":"d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b"}
Jan 26 10:36:01 crc kubenswrapper[4827]: I0126 10:36:01.564534 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerStarted","Data":"1a8abed2f9b36d244917e3771e708689e7c7e3a85b434abf6b538a94172b862c"}
Jan 26 10:36:02 crc kubenswrapper[4827]: I0126 10:36:02.576846 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerStarted","Data":"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"}
Jan 26 10:36:03 crc kubenswrapper[4827]: I0126 10:36:03.597477 4827 generic.go:334] "Generic (PLEG): container finished" podID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerID="d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194" exitCode=0
Jan 26 10:36:03 crc kubenswrapper[4827]: I0126 10:36:03.597540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerDied","Data":"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"}
Jan 26 10:36:04 crc kubenswrapper[4827]: I0126 10:36:04.607184 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerStarted","Data":"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"}
Jan 26 10:36:04 crc kubenswrapper[4827]: I0126 10:36:04.630333 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2t282" podStartSLOduration=3.139393031 podStartE2EDuration="5.630314567s" podCreationTimestamp="2026-01-26 10:35:59 +0000 UTC" firstStartedPulling="2026-01-26 10:36:01.565806168 +0000 UTC m=+5390.214477987" lastFinishedPulling="2026-01-26 10:36:04.056727654 +0000 UTC m=+5392.705399523" observedRunningTime="2026-01-26 10:36:04.6213091 +0000 UTC m=+5393.269980919" watchObservedRunningTime="2026-01-26 10:36:04.630314567 +0000 UTC m=+5393.278986386"
Jan 26 10:36:10 crc kubenswrapper[4827]: I0126 10:36:10.184028 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:10 crc kubenswrapper[4827]: I0126 10:36:10.185455 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:10 crc kubenswrapper[4827]: I0126 10:36:10.233766 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:10 crc kubenswrapper[4827]: I0126 10:36:10.749666 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:10 crc kubenswrapper[4827]: I0126 10:36:10.817838 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:36:12 crc kubenswrapper[4827]: I0126 10:36:12.674109 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2t282" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="registry-server" containerID="cri-o://00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026" gracePeriod=2
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.113381 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.280708 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content\") pod \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") "
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.280858 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities\") pod \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") "
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.280931 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdhz7\" (UniqueName: \"kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7\") pod \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\" (UID: \"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140\") "
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.283591 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities" (OuterVolumeSpecName: "utilities") pod "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" (UID: "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.293244 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7" (OuterVolumeSpecName: "kube-api-access-bdhz7") pod "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" (UID: "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140"). InnerVolumeSpecName "kube-api-access-bdhz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.313252 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" (UID: "cf17fe3f-ac36-4f9c-a65c-b6f0551c3140"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.383515 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.383545 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdhz7\" (UniqueName: \"kubernetes.io/projected/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-kube-api-access-bdhz7\") on node \"crc\" DevicePath \"\""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.383556 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.693502 4827 generic.go:334] "Generic (PLEG): container finished" podID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerID="00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026" exitCode=0
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.693571 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerDied","Data":"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"}
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.693596 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2t282"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.693611 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2t282" event={"ID":"cf17fe3f-ac36-4f9c-a65c-b6f0551c3140","Type":"ContainerDied","Data":"1a8abed2f9b36d244917e3771e708689e7c7e3a85b434abf6b538a94172b862c"}
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.693669 4827 scope.go:117] "RemoveContainer" containerID="00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.732087 4827 scope.go:117] "RemoveContainer" containerID="d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.776959 4827 scope.go:117] "RemoveContainer" containerID="d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.777918 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.789341 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2t282"]
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.821618 4827 scope.go:117] "RemoveContainer" containerID="00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"
Jan 26 10:36:13 crc kubenswrapper[4827]: E0126 10:36:13.822124 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026\": container with ID starting with 00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026 not found: ID does not exist" containerID="00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.822156 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026"} err="failed to get container status \"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026\": rpc error: code = NotFound desc = could not find container \"00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026\": container with ID starting with 00448cf7347de019024f971c924685cfbb9b021a6ebd090d968e61715a2a3026 not found: ID does not exist"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.822205 4827 scope.go:117] "RemoveContainer" containerID="d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"
Jan 26 10:36:13 crc kubenswrapper[4827]: E0126 10:36:13.822435 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194\": container with ID starting with d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194 not found: ID does not exist" containerID="d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.822481 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194"} err="failed to get container status \"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194\": rpc error: code = NotFound desc = could not find container \"d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194\": container with ID starting with d4fd286353226f5e18cba21926cd6c96fa152b39838c45c5fe3ed8cac822f194 not found: ID does not exist"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.822504 4827 scope.go:117] "RemoveContainer" containerID="d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b"
Jan 26 10:36:13 crc kubenswrapper[4827]: E0126 10:36:13.822749 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b\": container with ID starting with d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b not found: ID does not exist" containerID="d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b"
Jan 26 10:36:13 crc kubenswrapper[4827]: I0126 10:36:13.822777 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b"} err="failed to get container status \"d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b\": rpc error: code = NotFound desc = could not find container \"d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b\": container with ID starting with d50f47a6628e42eb81784abd7639214cddda45528f3d1621363e171a1e2c9b8b not found: ID does not exist"
Jan 26 10:36:15 crc kubenswrapper[4827]: I0126 10:36:15.714989 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" path="/var/lib/kubelet/pods/cf17fe3f-ac36-4f9c-a65c-b6f0551c3140/volumes"
Jan 26 10:36:20 crc kubenswrapper[4827]: I0126 10:36:20.575296 4827 scope.go:117] "RemoveContainer" containerID="604c820c068085b595523ca09f69c688e245a7650e3a929a82c4601498c9d39d"
Jan 26 10:36:42 crc kubenswrapper[4827]: I0126 10:36:42.268796 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 10:36:42 crc kubenswrapper[4827]: I0126 10:36:42.269405 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 10:36:48 crc kubenswrapper[4827]: I0126 10:36:48.059540 4827 generic.go:334] "Generic (PLEG): container finished" podID="249f5c89-068c-4736-a6a2-29f200f4b201" containerID="2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725" exitCode=0
Jan 26 10:36:48 crc kubenswrapper[4827]: I0126 10:36:48.059918 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" event={"ID":"249f5c89-068c-4736-a6a2-29f200f4b201","Type":"ContainerDied","Data":"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725"}
Jan 26 10:36:48 crc kubenswrapper[4827]: I0126 10:36:48.061520 4827 scope.go:117] "RemoveContainer" containerID="2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725"
Jan 26 10:36:48 crc kubenswrapper[4827]: I0126 10:36:48.148997 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kqgh6_must-gather-rz4d5_249f5c89-068c-4736-a6a2-29f200f4b201/gather/0.log"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.517138 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k246p"]
Jan 26 10:36:51 crc kubenswrapper[4827]: E0126 10:36:51.517890 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="registry-server"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.517901 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="registry-server"
Jan 26 10:36:51 crc kubenswrapper[4827]: E0126 10:36:51.517923 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="extract-utilities"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.517929 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="extract-utilities"
Jan 26 10:36:51 crc kubenswrapper[4827]: E0126 10:36:51.517942 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="extract-content"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.517948 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="extract-content"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.518095 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf17fe3f-ac36-4f9c-a65c-b6f0551c3140" containerName="registry-server"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.519276 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k246p"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.531678 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k246p"]
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.563580 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.563778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2hl\" (UniqueName: \"kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p"
Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.563821
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.666377 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.666583 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2hl\" (UniqueName: \"kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.666616 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.666977 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.667397 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.692989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2hl\" (UniqueName: \"kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl\") pod \"redhat-operators-k246p\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:51 crc kubenswrapper[4827]: I0126 10:36:51.864030 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:36:52 crc kubenswrapper[4827]: I0126 10:36:52.452231 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k246p"] Jan 26 10:36:53 crc kubenswrapper[4827]: I0126 10:36:53.104623 4827 generic.go:334] "Generic (PLEG): container finished" podID="e53389c5-fdff-46bb-b086-db7873bc8e0b" containerID="325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8" exitCode=0 Jan 26 10:36:53 crc kubenswrapper[4827]: I0126 10:36:53.104728 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerDied","Data":"325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8"} Jan 26 10:36:53 crc kubenswrapper[4827]: I0126 10:36:53.104755 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerStarted","Data":"16f9edd4762bb3e22235d62ae2018ae749c3ccc8fbcffea9a1b0562ce0a0e5a4"} Jan 26 10:36:54 crc kubenswrapper[4827]: I0126 10:36:54.124326 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerStarted","Data":"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1"} Jan 26 10:36:58 crc kubenswrapper[4827]: I0126 10:36:58.155004 4827 generic.go:334] "Generic (PLEG): container finished" podID="e53389c5-fdff-46bb-b086-db7873bc8e0b" containerID="4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1" exitCode=0 Jan 26 10:36:58 crc kubenswrapper[4827]: I0126 10:36:58.155067 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerDied","Data":"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1"} Jan 26 10:36:59 crc kubenswrapper[4827]: I0126 10:36:59.166498 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerStarted","Data":"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1"} Jan 26 10:36:59 crc kubenswrapper[4827]: I0126 10:36:59.196138 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k246p" podStartSLOduration=2.7241548 podStartE2EDuration="8.196116973s" podCreationTimestamp="2026-01-26 10:36:51 +0000 UTC" firstStartedPulling="2026-01-26 10:36:53.106718318 +0000 UTC m=+5441.755390137" lastFinishedPulling="2026-01-26 10:36:58.578680481 +0000 UTC m=+5447.227352310" observedRunningTime="2026-01-26 10:36:59.191811465 +0000 UTC m=+5447.840483294" watchObservedRunningTime="2026-01-26 10:36:59.196116973 +0000 UTC m=+5447.844788802" Jan 26 10:37:01 crc kubenswrapper[4827]: I0126 10:37:01.864891 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:01 crc 
kubenswrapper[4827]: I0126 10:37:01.865970 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:02 crc kubenswrapper[4827]: I0126 10:37:02.927905 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k246p" podUID="e53389c5-fdff-46bb-b086-db7873bc8e0b" containerName="registry-server" probeResult="failure" output=< Jan 26 10:37:02 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 26 10:37:02 crc kubenswrapper[4827]: > Jan 26 10:37:03 crc kubenswrapper[4827]: I0126 10:37:03.667887 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kqgh6/must-gather-rz4d5"] Jan 26 10:37:03 crc kubenswrapper[4827]: I0126 10:37:03.668248 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" podUID="249f5c89-068c-4736-a6a2-29f200f4b201" containerName="copy" containerID="cri-o://575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f" gracePeriod=2 Jan 26 10:37:03 crc kubenswrapper[4827]: I0126 10:37:03.681919 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kqgh6/must-gather-rz4d5"] Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.128098 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kqgh6_must-gather-rz4d5_249f5c89-068c-4736-a6a2-29f200f4b201/copy/0.log" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.129300 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.221355 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kqgh6_must-gather-rz4d5_249f5c89-068c-4736-a6a2-29f200f4b201/copy/0.log" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.221773 4827 generic.go:334] "Generic (PLEG): container finished" podID="249f5c89-068c-4736-a6a2-29f200f4b201" containerID="575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f" exitCode=143 Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.221820 4827 scope.go:117] "RemoveContainer" containerID="575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.221852 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kqgh6/must-gather-rz4d5" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.238672 4827 scope.go:117] "RemoveContainer" containerID="2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.263432 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45g9k\" (UniqueName: \"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k\") pod \"249f5c89-068c-4736-a6a2-29f200f4b201\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.263578 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output\") pod \"249f5c89-068c-4736-a6a2-29f200f4b201\" (UID: \"249f5c89-068c-4736-a6a2-29f200f4b201\") " Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.275526 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k" (OuterVolumeSpecName: "kube-api-access-45g9k") pod "249f5c89-068c-4736-a6a2-29f200f4b201" (UID: "249f5c89-068c-4736-a6a2-29f200f4b201"). InnerVolumeSpecName "kube-api-access-45g9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.285012 4827 scope.go:117] "RemoveContainer" containerID="575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f" Jan 26 10:37:04 crc kubenswrapper[4827]: E0126 10:37:04.286134 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f\": container with ID starting with 575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f not found: ID does not exist" containerID="575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.286180 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f"} err="failed to get container status \"575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f\": rpc error: code = NotFound desc = could not find container \"575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f\": container with ID starting with 575cb354d9b47daa7dae6ac9e9eab324a30d910ae9d6f1de2f0dd55b176ac77f not found: ID does not exist" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.286202 4827 scope.go:117] "RemoveContainer" containerID="2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725" Jan 26 10:37:04 crc kubenswrapper[4827]: E0126 10:37:04.286409 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725\": 
container with ID starting with 2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725 not found: ID does not exist" containerID="2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.286435 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725"} err="failed to get container status \"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725\": rpc error: code = NotFound desc = could not find container \"2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725\": container with ID starting with 2e1b44c94e582f5cbab2c10d184a06ca0da0acedcbb69a431559a7b5834d9725 not found: ID does not exist" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.366412 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45g9k\" (UniqueName: \"kubernetes.io/projected/249f5c89-068c-4736-a6a2-29f200f4b201-kube-api-access-45g9k\") on node \"crc\" DevicePath \"\"" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.378445 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "249f5c89-068c-4736-a6a2-29f200f4b201" (UID: "249f5c89-068c-4736-a6a2-29f200f4b201"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:37:04 crc kubenswrapper[4827]: I0126 10:37:04.467750 4827 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/249f5c89-068c-4736-a6a2-29f200f4b201-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 10:37:05 crc kubenswrapper[4827]: I0126 10:37:05.713182 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249f5c89-068c-4736-a6a2-29f200f4b201" path="/var/lib/kubelet/pods/249f5c89-068c-4736-a6a2-29f200f4b201/volumes" Jan 26 10:37:11 crc kubenswrapper[4827]: I0126 10:37:11.957044 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:12 crc kubenswrapper[4827]: I0126 10:37:12.035351 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:12 crc kubenswrapper[4827]: I0126 10:37:12.212213 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k246p"] Jan 26 10:37:12 crc kubenswrapper[4827]: I0126 10:37:12.269040 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:37:12 crc kubenswrapper[4827]: I0126 10:37:12.269174 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.302046 4827 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-k246p" podUID="e53389c5-fdff-46bb-b086-db7873bc8e0b" containerName="registry-server" containerID="cri-o://c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1" gracePeriod=2 Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.817880 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.962226 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content\") pod \"e53389c5-fdff-46bb-b086-db7873bc8e0b\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.962278 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q2hl\" (UniqueName: \"kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl\") pod \"e53389c5-fdff-46bb-b086-db7873bc8e0b\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.963377 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities\") pod \"e53389c5-fdff-46bb-b086-db7873bc8e0b\" (UID: \"e53389c5-fdff-46bb-b086-db7873bc8e0b\") " Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.964438 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities" (OuterVolumeSpecName: "utilities") pod "e53389c5-fdff-46bb-b086-db7873bc8e0b" (UID: "e53389c5-fdff-46bb-b086-db7873bc8e0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:37:13 crc kubenswrapper[4827]: I0126 10:37:13.968810 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl" (OuterVolumeSpecName: "kube-api-access-5q2hl") pod "e53389c5-fdff-46bb-b086-db7873bc8e0b" (UID: "e53389c5-fdff-46bb-b086-db7873bc8e0b"). InnerVolumeSpecName "kube-api-access-5q2hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.066455 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q2hl\" (UniqueName: \"kubernetes.io/projected/e53389c5-fdff-46bb-b086-db7873bc8e0b-kube-api-access-5q2hl\") on node \"crc\" DevicePath \"\"" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.066505 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.084121 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e53389c5-fdff-46bb-b086-db7873bc8e0b" (UID: "e53389c5-fdff-46bb-b086-db7873bc8e0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.167983 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e53389c5-fdff-46bb-b086-db7873bc8e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.310974 4827 generic.go:334] "Generic (PLEG): container finished" podID="e53389c5-fdff-46bb-b086-db7873bc8e0b" containerID="c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1" exitCode=0 Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.311014 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerDied","Data":"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1"} Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.311042 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k246p" event={"ID":"e53389c5-fdff-46bb-b086-db7873bc8e0b","Type":"ContainerDied","Data":"16f9edd4762bb3e22235d62ae2018ae749c3ccc8fbcffea9a1b0562ce0a0e5a4"} Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.311039 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k246p" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.311059 4827 scope.go:117] "RemoveContainer" containerID="c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.330202 4827 scope.go:117] "RemoveContainer" containerID="4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.369329 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k246p"] Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.372593 4827 scope.go:117] "RemoveContainer" containerID="325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.379406 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k246p"] Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.404414 4827 scope.go:117] "RemoveContainer" containerID="c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1" Jan 26 10:37:14 crc kubenswrapper[4827]: E0126 10:37:14.404876 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1\": container with ID starting with c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1 not found: ID does not exist" containerID="c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.404908 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1"} err="failed to get container status \"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1\": rpc error: code = NotFound desc = could not find container 
\"c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1\": container with ID starting with c8fc6496ac0954329e4f35b6d18498c95b1ce1415feafd5f38f80fb6de631ec1 not found: ID does not exist" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.404930 4827 scope.go:117] "RemoveContainer" containerID="4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1" Jan 26 10:37:14 crc kubenswrapper[4827]: E0126 10:37:14.405268 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1\": container with ID starting with 4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1 not found: ID does not exist" containerID="4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.405311 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1"} err="failed to get container status \"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1\": rpc error: code = NotFound desc = could not find container \"4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1\": container with ID starting with 4924f71e768a937b044f31f8fc3866f7b7733bbe1c75a442a523c2ca505cc8a1 not found: ID does not exist" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.405335 4827 scope.go:117] "RemoveContainer" containerID="325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8" Jan 26 10:37:14 crc kubenswrapper[4827]: E0126 10:37:14.406072 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8\": container with ID starting with 325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8 not found: ID does not exist" 
containerID="325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8" Jan 26 10:37:14 crc kubenswrapper[4827]: I0126 10:37:14.406096 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8"} err="failed to get container status \"325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8\": rpc error: code = NotFound desc = could not find container \"325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8\": container with ID starting with 325990692e9e568327d15ce134ea3fbac184630f6c2e90b3c8fd461f3baf11c8 not found: ID does not exist" Jan 26 10:37:15 crc kubenswrapper[4827]: I0126 10:37:15.727117 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53389c5-fdff-46bb-b086-db7873bc8e0b" path="/var/lib/kubelet/pods/e53389c5-fdff-46bb-b086-db7873bc8e0b/volumes" Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.269005 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.269432 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.269482 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.270277 4827 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d486373c12cc9aceb8fe5211fbe2735239f26e572f0a98f65e78d07d605e632"} pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.270333 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" containerID="cri-o://0d486373c12cc9aceb8fe5211fbe2735239f26e572f0a98f65e78d07d605e632" gracePeriod=600 Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.614151 4827 generic.go:334] "Generic (PLEG): container finished" podID="ef39dc20-499c-4665-9555-481361ffe06d" containerID="0d486373c12cc9aceb8fe5211fbe2735239f26e572f0a98f65e78d07d605e632" exitCode=0 Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.614186 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerDied","Data":"0d486373c12cc9aceb8fe5211fbe2735239f26e572f0a98f65e78d07d605e632"} Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.614546 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" event={"ID":"ef39dc20-499c-4665-9555-481361ffe06d","Type":"ContainerStarted","Data":"d0cded91372a8aa7ae9831ee3620861647bbc0dd76f081a3ac2290402ef211ca"} Jan 26 10:37:42 crc kubenswrapper[4827]: I0126 10:37:42.614572 4827 scope.go:117] "RemoveContainer" containerID="f515317d6b342adfaa6b44b858df947dc8d3e5158882f9e4070019f22a0b1b68" Jan 26 10:39:42 crc kubenswrapper[4827]: I0126 10:39:42.268963 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:39:42 crc kubenswrapper[4827]: I0126 10:39:42.269520 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:40:12 crc kubenswrapper[4827]: I0126 10:40:12.269370 4827 patch_prober.go:28] interesting pod/machine-config-daemon-k9x8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:40:12 crc kubenswrapper[4827]: I0126 10:40:12.270162 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9x8x" podUID="ef39dc20-499c-4665-9555-481361ffe06d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515135642233024451 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015135642234017367 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015135626557016524 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015135626560015466 5ustar corecore